View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0001395 | ardour | features | public | 2007-01-03 07:26 | 2007-01-18 07:34 |
Reporter | mtaht | Assigned To | | | |
Priority | normal | Severity | minor | Reproducibility | always |
Status | assigned | Resolution | open | | |
Summary | 0001395: "Disk was not able to keep up with Ardour" dialog | | | | |
Description | It would be nice if one and only one of these dialogs ever popped up. As it is, I fast-forward Ardour and get one, then fast-forward again and get another; sooner or later I have a dozen of them to dismiss. I wouldn't mind at all if you didn't have to press OK to get rid of it, either; have it disappear after 20 minutes or so. (I am getting more RAM and disk shortly, so I hope never to see this again myself, but others may not be so lucky.) This bug has largely migrated into improving disk I/O rather than the dialog, but the dialog itself still needs improvement. | | | | |
Tags | No tags attached. | ||||
|
Due to how Ardour interleaves writes, if you copy your Ardour project's directory elsewhere, you end up with a suboptimal packing of the files on disk and a lot of extra head movement. This message dialog shows up a lot when fast-forwarding or reversing once you overrun the disk cache; the underlying problem (not the dialog) also shows up on export. On one project of mine in this scenario (10 tracks, a LOT of overdubs), Ardour uses only 60% of the CPU because it is bottlenecked on disk.

In my limited benchmarking, striping your drives in that case doesn't help AT ALL for large Ardour projects; all the drives just end up doing a lot of head movement (tried with a 256k stripe and a 32k stripe).

There are four possible solutions:

1. A general-purpose script, "ardourcp", that copies the files over in roughly the order and interleave in which they are referenced in the Ardour session XML. (I started writing this but got bogged down in the Perl XML interfaces.)
2. "Use enough RAM to hold your entire project", which in my case would be 32GB.
3. Split the disk butler into multiple threads.
4. Move the disk thread to AIO. Theoretically, current versions of AIO can do buffered reads/writes, but it's not clear the userspace libraries have caught up yet. This would give the kernel a lot more information to work with: on the project above there would be a dozen or more kernel requests outstanding at once, so the head-scheduling algorithms could optimize movement correctly, and it would scale well on SMP architectures. The downsides on a single CPU are many.
|
Filesystem-level AIO is available as a patch to the kernel and looks like it's headed for -mm: http://marc.theaimsgroup.com/?l=linux-aio&m=116786267018351&w=2 Still, uncached AIO seems like a win (especially when playing in reverse). I'm not keen on the overarching libraries, but the lowest layer (io_submit, io_getevents) seems clean enough to me and, in my limited testing, actually works on files opened without O_DIRECT.
|
One way to make fast-forward and rewind errors less invasive would be to arbitrarily slow down the transport when the error happens at speeds > 1 in either direction, say by 1/4 or 1/2. Ardour does fine fast-forwarding and reversing when the data is all cached, but it goes screwy when moving faster and overrunning the disk cache.
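The suggested backoff might look like the sketch below. The scaling factor (1/2, one of the two values suggested above) and the choice to clamp back toward 1x rather than below it are assumptions about how the transport code might apply the idea; this is not Ardour's actual transport logic.

```python
def throttle_on_overrun(speed: float, factor: float = 0.5) -> float:
    """On a disk-overrun error, scale a fast transport speed toward 1x.

    Speeds at or below 1x (in either direction) are left alone, since the
    overrun only happens when moving faster than the butler can feed the
    cache; faster speeds are multiplied by `factor` but never reduced
    below 1x, preserving the transport's direction.
    """
    if abs(speed) <= 1.0:
        return speed
    slowed = speed * factor
    if abs(slowed) < 1.0:
        slowed = 1.0 if speed > 0 else -1.0
    return slowed
```

Each overrun error would call this instead of raising a dialog, so an 8x fast-forward steps down 8x, 4x, 2x, 1x until the disk keeps up.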
Date Modified | Username | Field | Change |
---|---|---|---|
2007-01-03 07:26 | mtaht | New Issue | |
2007-01-16 23:45 | mtaht | Note Added: 0003074 | |
2007-01-16 23:47 | mtaht | Note Edited: 0003074 | |
2007-01-16 23:51 | mtaht | Note Edited: 0003074 | |
2007-01-17 11:15 | mtaht | Note Added: 0003076 | |
2007-01-17 11:16 | mtaht | Severity | trivial => minor |
2007-01-17 11:16 | mtaht | Status | new => assigned |
2007-01-17 11:16 | mtaht | Category | bugs => features |
2007-01-17 11:16 | mtaht | Product Version | => SVN |
2007-01-17 11:16 | mtaht | Description Updated | |
2007-01-18 07:34 | mtaht | Note Added: 0003084 |