Uploading assets is slow and gets slower as your assets library grows. I’ve seen 1 to 3 seconds per file, even for very small files on a fast internet connection.
This issue is known and is being actively worked on. It requires a change in the database backend, which should speed everything up and make uploads much faster.
This is done, changed status to closed.
The database design was slightly changed and operations were overhauled to make all accesses as snappy as possible. Together with the new object storage server, this has made everything a lot more efficient and uploads much faster.
No, the new database design makes this a lot faster, and there is no chance of it happening again. It’s not yet as optimal as I would like: each file is uploaded separately, with no ability to mass-upload multiple files at once when they are very small (tokens, for example), so there’s a latency cost for each file while waiting for its confirmation. That should be improved soon (see the Forge Feature Roadmap), but it’s not as bad as it used to be. Accessing the assets database now takes under 10 milliseconds, instead of the 1 to 3 seconds it took before.
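To make the per-file latency concrete, here is a minimal sketch of the two patterns; the endpoint URL and the `requests` calls are illustrative assumptions, not the actual Forge API:

```python
import requests

API = "https://forge.example/api/assets"  # hypothetical endpoint

def upload_one_by_one(paths):
    # One round trip per file: each upload waits for the server's
    # success confirmation before the next file starts, so N small
    # files pay the network latency N times.
    for path in paths:
        with open(path, "rb") as f:
            requests.post(API, files={"file": f}).raise_for_status()

def upload_batched(paths):
    # Hypothetical mass-upload: all the small files go in a single
    # request, so the per-file latency is paid only once.
    files = [("files", open(p, "rb")) for p in paths]
    try:
        requests.post(API + "/batch", files=files).raise_for_status()
    finally:
        for _, f in files:
            f.close()
```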
I guess as an example, it takes me about 10-20 seconds to upload 72 files totaling 225 MB if I do it through FileZilla to an Amazon server of my own. Using Forge, this takes me about 5 minutes. I’m not sure where the bottleneck is, but it’s not insignificant.
Are you saying that gap will close significantly with new features?
Yes, that’s a different issue. The upload of an asset isn’t slow, it’s the mass import of multiple assets that is.
FileZilla will stream multiple files at once or send them in a single continuous stream over a single connection to the server. In the case of Forge assets, it opens a separate connection for every file, uploads it, waits for the success confirmation, then moves on to the next file. Don’t forget that for each file you upload to the server, the server itself then needs to upload it to the geo-distributed S3 storage before confirming reception. Although that’s faster than your upload, since the two servers are in the same datacenter, it still requires one additional connection to be established.
How long would it take if you used FileZilla to upload one file, then disconnect, reconnect, upload the second one, disconnect, reconnect, etc… for all 72 files? Probably more than the 10-20 seconds you mentioned.
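For a rough sense of scale, here’s a back-of-the-envelope model of those two patterns; every number in it is an assumption for illustration, not a measured Forge or FileZilla value:

```python
FILES = 72
TOTAL_MB = 225
BANDWIDTH_MB_S = 15   # assumed raw upload throughput (~120 Mbps)
HANDSHAKE_S = 0.2     # assumed TCP/TLS setup cost per new connection
RELAY_S = 0.05        # assumed server-to-S3 hop, same datacenter

# FileZilla-style: one persistent connection, files streamed back to back.
streamed = HANDSHAKE_S + TOTAL_MB / BANDWIDTH_MB_S

# Per-file pattern: connect, upload one file, wait for the S3 relay and
# the confirmation, then start over for the next file.
per_file = FILES * (HANDSHAKE_S + (TOTAL_MB / FILES) / BANDWIDTH_MB_S + RELAY_S)

print(f"streamed: {streamed:.0f} s, per-file: {per_file:.0f} s")
# streamed: 15 s, per-file: 33 s -- same bytes, very different wall time,
# and real-world per-file overhead (TLS, auth, processing) is higher still.
```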
So to summarize, the issue you’re describing (which is definitely an issue, and which I plan on fixing very soon) is not “asset upload is slow” but rather “no mass-upload optimizations”, which is why I said it’s unrelated to this issue.
This specific issue was that a single upload took 1 to 3 seconds to process on the server side, and the more assets you had, the longer that processing took. Now, whether you have 0 assets or a million, it always takes the same amount of time, less than 10 milliseconds, to process an upload: add it to your assets library, create the parent folders if they don’t already exist, and update your total used quota (which is what took seconds before). That’s independent of the time it takes to establish the connection and upload the actual file content.
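As a sketch of the kind of schema change that makes this constant-time, compare recomputing the quota on every upload with keeping a running counter; the tables here are hypothetical, not Forge’s actual design:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assets (user_id INT, path TEXT, size INT)")
db.execute("CREATE TABLE quotas (user_id INT PRIMARY KEY, used INT)")
db.execute("INSERT INTO quotas VALUES (1, 0)")

def register_upload_old(user_id, path, size):
    # Scales with library size: recomputes the quota by scanning every
    # asset row the user owns, so each upload gets slower as the
    # library grows.
    db.execute("INSERT INTO assets VALUES (?, ?, ?)", (user_id, path, size))
    db.execute(
        "UPDATE quotas SET used ="
        " (SELECT COALESCE(SUM(size), 0) FROM assets WHERE user_id = ?)"
        " WHERE user_id = ?",
        (user_id, user_id),
    )

def register_upload_new(user_id, path, size):
    # Constant time: increments a running counter, the same cost
    # whether the user has 0 assets or a million.
    db.execute("INSERT INTO assets VALUES (?, ?, ?)", (user_id, path, size))
    db.execute(
        "UPDATE quotas SET used = used + ? WHERE user_id = ?",
        (size, user_id),
    )
```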
Are you saying that gap will close significantly with new features?
Yes, it should become much faster. I have a bunch of optimizations planned. There will always be the cost of the server uploading each file to the S3 storage, but I have been working on ways to optimize that as well.