Can you get a screen snap of what you are seeing? Of course it would be better still to have one of how it used to look, but if you can just 'splain what I'm seeing in the screen snap that would help.
That's 67MB. I can't imagine what image could be that big. When you say popping up, what does that mean exactly? Do you mean while browsing media, upon entering a screen where you will be browsing media, showing up in the logs, or when doing a preview popup?
Actually, that's 64MB. The only place where a system memory buffer is used with a max size of 64MB is in the base media repo driver, when the client service queries the metadata. That's just a maximum size, of course, not the actual bytes allocated. It uses a system buffer so that it can grow efficiently, since it's not known ahead of time how much memory will be needed to create the blob of data that holds the flattened-out metadata.
In order for that to fail, it would mean the process doesn't have 64MB of virtual memory space left to reserve, which would be weird. The reservation doesn't have to be backed by real memory or anything, and it's only used for a short period of time. There's a sketch of the mechanism below.
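To make that concrete, here's a minimal sketch of the reserve-then-commit pattern that a "system buffer" typically uses on Windows. The class and method names here are mine for illustration, not the actual CQC buffer class: the max size is only reserved as address space up front, and physical pages are committed only as the buffer actually grows.

```cpp
// Minimal sketch of the reserve-then-commit pattern a "system buffer"
// typically uses on Windows (hypothetical illustration, not the actual
// CQC buffer class). The max size is only reserved as address space;
// physical pages are committed as the buffer actually grows.
#include <windows.h>
#include <algorithm>
#include <cstring>
#include <stdexcept>

class GrowableSysBuf
{
public:
    explicit GrowableSysBuf(SIZE_T maxSize) :
        m_maxSize(maxSize), m_committed(0), m_curSize(0)
    {
        // Reserve address space only; no physical memory is used yet
        m_pBuf = static_cast<BYTE*>(
            ::VirtualAlloc(nullptr, maxSize, MEM_RESERVE, PAGE_READWRITE));
        if (!m_pBuf)
            throw std::runtime_error("Could not reserve buffer space");
    }

    ~GrowableSysBuf() { ::VirtualFree(m_pBuf, 0, MEM_RELEASE); }

    void Append(const void* pData, SIZE_T count)
    {
        const SIZE_T newSize = m_curSize + count;
        if (newSize > m_maxSize)
            throw std::runtime_error("Buffer max size exceeded");

        // Commit more pages from the reservation if needed, rounding up
        // to 64K blocks (clamped to the max) to reduce commit calls
        if (newSize > m_committed)
        {
            const SIZE_T newCommit = std::min(
                ((newSize + 0xFFFF) / 0x10000) * 0x10000, m_maxSize);
            if (!::VirtualAlloc(m_pBuf, newCommit, MEM_COMMIT, PAGE_READWRITE))
                throw std::runtime_error("Could not commit buffer pages");
            m_committed = newCommit;
        }
        std::memcpy(m_pBuf + m_curSize, pData, count);
        m_curSize = newSize;
    }

private:
    BYTE*   m_pBuf;
    SIZE_T  m_maxSize;
    SIZE_T  m_committed;
    SIZE_T  m_curSize;
};
```

The point being that the 64MB only has to exist as free address space, not as actual memory, which is why a failure there would be surprising.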
But I'll lower it to 24MB, which, based on your experience of 13MB, should be enough to hold a really, really big repo metadata set. It's a fine line to walk: we have to be able to handle the worst case scenario, but we don't want to use more memory than we have to.
Still, let me know where exactly you are seeing the error and from what, and get me a log dump, so that I can be sure I understand the source of the issue.
07-12-2014, 12:55 PM
OK. I'm just punting on the whole thing. I'm doing something I probably should have done long ago, which is to create a 'chunked output stream' class. That can be used any time there's a potentially large amount of data to stream out to memory. It doesn't have to commit to anything up front, since it stores the data in a list of 1MB 'memory chunks' and just allocates another one as required.
So it has the benefits of a system buffer (efficient expansion without having to allocate it all up front) but without the issues of system buffers, which have to fit into virtual memory and which still require some max size to be set up front, which makes the problem worse. This new type of output stream does let you set a max size, but it's not anything that has to actually be available; it's just a worst case cap, so that something that went out of control couldn't eat up all the memory. There's a sketch of the idea below.
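As a rough illustration (hypothetical names and a simplified interface, not the actual class), a chunked output stream might look something like this: writes spill across 1MB chunks that are allocated only on demand, and the max size is just a sanity check rather than reserved space.

```cpp
// Minimal sketch of a chunked output stream (hypothetical names, not the
// actual CQC class). Data accumulates in 1MB chunks allocated only when
// needed; the max size is a sanity cap, not pre-allocated space.
#include <algorithm>
#include <cstring>
#include <memory>
#include <stdexcept>
#include <vector>

class ChunkedOutStream
{
public:
    static constexpr size_t kChunkSize = 1024 * 1024;   // 1MB per chunk

    explicit ChunkedOutStream(size_t maxSize) :
        m_maxSize(maxSize), m_curSize(0) {}

    void WriteBytes(const char* pData, size_t count)
    {
        // The cap only guards against runaway growth; nothing is reserved
        if (m_curSize + count > m_maxSize)
            throw std::runtime_error("Maximum stream size exceeded");

        while (count)
        {
            const size_t ofsInChunk = m_curSize % kChunkSize;

            // At a chunk boundary, allocate the next 1MB chunk on demand
            if (ofsInChunk == 0)
                m_chunks.push_back(std::make_unique<char[]>(kChunkSize));

            // Fill up to the end of the current chunk, then loop around
            const size_t toCopy = std::min(count, kChunkSize - ofsInChunk);
            std::memcpy(m_chunks.back().get() + ofsInChunk, pData, toCopy);
            pData += toCopy;
            count -= toCopy;
            m_curSize += toCopy;
        }
    }

    size_t CurSize() const { return m_curSize; }

private:
    std::vector<std::unique_ptr<char[]>> m_chunks;
    size_t m_maxSize;
    size_t m_curSize;
};
```

The win over the system buffer approach is that nothing ever depends on finding one large contiguous range: in the worst case you just hold a list of separate 1MB allocations, and you only ever hold as many as the data actually needs.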
I'm updating the media system to use this new type of output stream, which will get rid of the need for these system buffers, and avoid the whole problem.
I've basically got it done, but want to do more testing, since this isn't a trivial change. So I'll get a new drop out tomorrow mid-day. It should certainly be more efficient now, memory-wise.
I just ran an end to end test and it all appears to be working. But that's all I can take for today. I need to watch a movie or something. I'll pick it up tomorrow and this should take care of the remaining issue.
Hey, it's taking longer than originally thought, because I decided to just go ahead and do it right and get rid of the use of system buffers, except in those places where the API requires system-allocated memory.
There were a number of other places where a similar sort of issue was involved, such as handling graphics files (which can be quite large). And really getting rid of the system buffers required providing both input and output chunked streams (the input side is sketched below), and yesterday I only did the output half of it. So I had to do the input side today, and write more unit tests, which took most of the time.
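For completeness, here's the input half sketched in the same hypothetical style, building on the ChunkedOutStream sketch above: it walks the same list of 1MB chunks sequentially, so data streamed out to memory can be read back in without ever being gathered into one contiguous buffer.

```cpp
// Minimal sketch of the input side (hypothetical, building on the
// ChunkedOutStream sketch above). It reads back sequentially from the
// chunk list that an output stream filled in.
#include <algorithm>
#include <cstring>
#include <memory>
#include <vector>

class ChunkedInStream
{
public:
    // Takes the chunk list and total byte count the output stream produced
    ChunkedInStream(const std::vector<std::unique_ptr<char[]>>& chunks,
                    size_t totalSize) :
        m_chunks(chunks), m_totalSize(totalSize), m_readOfs(0) {}

    // Reads up to count bytes; returns the number actually read
    size_t ReadBytes(char* pToFill, size_t count)
    {
        size_t gotSoFar = 0;
        while (gotSoFar < count && m_readOfs < m_totalSize)
        {
            const size_t chunkInd = m_readOfs / ChunkedOutStream::kChunkSize;
            const size_t ofsInChunk = m_readOfs % ChunkedOutStream::kChunkSize;

            // Copy up to the end of the current chunk, the requested count,
            // or the end of the data, whichever comes first
            const size_t toCopy = std::min(
                {count - gotSoFar,
                 ChunkedOutStream::kChunkSize - ofsInChunk,
                 m_totalSize - m_readOfs});

            std::memcpy(pToFill + gotSoFar,
                        m_chunks[chunkInd].get() + ofsInChunk, toCopy);
            gotSoFar += toCopy;
            m_readOfs += toCopy;
        }
        return gotSoFar;
    }

private:
    const std::vector<std::unique_ptr<char[]>>& m_chunks;
    size_t m_totalSize;
    size_t m_readOfs;
};
```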