Advanced Memory Management: what is the idea/aim behind it?
Well, that is a good question. Advanced memory management is really about setting boundary conditions on how much memory BOINC and related processes are allowed to use.
We still get a few reports of BOINC causing systems to become unresponsive or sluggish. Most of our investigations revealed a machine that was paging heavily while BOINC was running. Paging is the process the OS uses to free up less frequently used memory for active tasks by writing those pages of memory to disk. Each page of memory is roughly 4KB on an x86 processor.
So let's say you are running a machine with 512MB of memory. Windows XP uses roughly 128MB of that on boot-up and will allow parts of itself to be paged out to disk. The last round of virus scanners I looked at want around 100MB of memory; the little system tray icons in the lower right part of your screen generally take about 5MB apiece, with the notable exception of the various IM clients, which have bloated out to 20-60MB apiece. Any additional programs running on your machine, such as a web browser or email client, can take anywhere from 20MB up to 100MB.
When the OS comes under memory pressure it starts looking for chunks of memory that haven't been touched in a while, writes them out to disk, and then loads something more relevant into that chunk of memory.
So let us say that you are attached to R@H and you walk away from your computer for an hour or so. During that time R@H has used over 256MB of memory continuously for at least 30 minutes, and the OS has had to page out a lot of stuff to make room for it, including parts of itself. Your start menu, or whichever application you happened to be using before you left, has to be reread from disk. All of that paging takes a few moments and makes your computer feel really, really slow.
With the introduction of this feature we hope we can finally close one of the last remaining loopholes to user responsiveness.
Right now we have the following two settings planned:
- Percentage of memory use while user is active.
- Percentage of memory use while user is idle.
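Since the feature was still being designed at the time, here is a purely hypothetical sketch of how those two settings might look in a BOINC global preferences file; the tag names and the XML shape are assumptions, not the final format:

```xml
<!-- Hypothetical preferences fragment; actual tag names may differ. -->
<global_preferences>
    <!-- Allow science apps up to 50% of RAM while the user is active -->
    <ram_max_used_busy_pct>50</ram_max_used_busy_pct>
    <!-- Allow up to 90% of RAM while the user is idle -->
    <ram_max_used_idle_pct>90</ram_max_used_idle_pct>
</global_preferences>
```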
What should happen is that BOINC will detect how much memory is installed on the machine and, every 10 seconds or so, look at how much memory each science application is using. If a science application exceeds its total allotment, BOINC will shut it down and look for another application to schedule.
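The check described above can be sketched in a few lines. This is an illustrative stand-in, not the real BOINC client code (which is C++); the `Task` class and `tasks_to_preempt` function are invented for the example:

```python
# Minimal sketch of the planned memory-enforcement check: find tasks
# whose memory use exceeds a percentage allotment of installed RAM.
# Task and tasks_to_preempt are illustrative names, not BOINC APIs.

class Task:
    def __init__(self, name, memory_mb):
        self.name = name
        self.memory_mb = memory_mb

def tasks_to_preempt(tasks, installed_ram_mb, limit_pct):
    """Return the tasks whose memory use exceeds the percentage allotment."""
    allotment_mb = installed_ram_mb * limit_pct / 100.0
    return [t for t in tasks if t.memory_mb > allotment_mb]

# Example: the 512MB machine above with a 50% allotment while active.
tasks = [Task("rosetta", 300), Task("seti", 40)]
over = tasks_to_preempt(tasks, installed_ram_mb=512, limit_pct=50)
print([t.name for t in over])  # ['rosetta'] -- would be shut down
```

In the real client this check would run on a timer (every 10 seconds or so) and be followed by rescheduling another application in the preempted task's place.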
I'm really looking forward to this feature, since my 2GB machine uses about 1.2GB of memory without BOINC even running, and I have four processors to feed. Until the middle of last year I had only 1GB in my machine, and it was pretty painful whenever BOINC rescheduled all the science applications on the machine while I was working.
Scheduler improvements (already implemented?): how do these help?
As far as I know, John McLeod has finished the work on the new scheduler and work-fetch policy. The new system should reduce the number of cycles wasted between an application's last checkpoint and the point when it had to quit due to a reschedule to honor resource shares.
John is really the wizard in this area.
How are any other improvements going to help us and the projects?
I believe the two major work items over the next year will be support for projects to use torrents in their file-download process, and the ability for projects to send out science applications optimized for each processor type, and possibly GPU-enabled applications.
Is anybody working on Boinczilla? Bug reports are piling up and nobody is sorting them out. :/
My bad, I’ll see what I can do about that this weekend.
Why not run the benchmark at a higher priority, so each system produces a consistent value, rather than the haphazard ones we get, particularly as the benchmark runs only every 5 days?
The idea behind running the benchmarks at the same priority level as the science applications is to get a rough idea of how many cycles the science applications will actually get. If you run the benchmarks at a normal thread priority it won't be that much more consistent, and if you run them at the highest thread priority a user-mode application can have, you'll get numbers that are not very realistic for a science application running as an idle process.
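To make the idea concrete, here is a toy sketch of measuring throughput after dropping to the lowest scheduling priority, the way BOINC benchmarks at the same priority its science apps run at. The `flops_benchmark` function is an invented stand-in; the real client uses a full Whetstone benchmark, and on Windows it sets thread priority through the Win32 API rather than `os.nice`:

```python
import os
import time

def flops_benchmark(iterations=1_000_000):
    """A toy floating-point loop, standing in for a real Whetstone run."""
    x = 1.0001
    start = time.perf_counter()
    for _ in range(iterations):
        x = x * 1.0000001
    elapsed = time.perf_counter() - start
    return iterations / elapsed  # rough "operations per second"

# Drop to the lowest ("idle") priority before measuring, so the number
# reflects what an idle-priority science app would actually see.
# os.nice is POSIX-only; Windows would use SetThreadPriority instead.
if hasattr(os, "nice"):
    os.nice(19)

rate = flops_benchmark()
print(rate > 0)  # prints True; the absolute rate depends on system load
```

Measured this way, the score drops whenever the machine is busier, which is exactly the point: it estimates the cycles left over for science applications, not the machine's peak speed.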
The systems are benchmarked every 5 days or so to account for changes to the environment, such as a more resource-intensive virus scanner or any content-indexing systems that might have been installed.
When are we going to see the first alpha/beta with the BSG?
Hopefully next week.
With regard to the idea of switching tasks at a checkpoint, what happens (as in, are there any checks, etc.) when an application gets "stuck" and doesn't make any progress? This also applies to a similar situation with current apps, where they get stuck and the client tries and tries to finish by the approaching deadline but obviously never will. This pushes the client into "no new work" (NNW) and earliest-deadline-first (EDF) modes. Will BOINC abandon the unit if no progress is made, or when the deadline is reached?
To be honest, I don’t know. I’ll have to bug John and David about that.
Is there any possibility of releasing 5.6.4 or 5.6.5 as alternate versions?
I don't intend to put them on the download page. But if you are comfortable enough with the quality of the client to recommend it, then go ahead and give people the link. I think we are far enough along in the testing process to know it isn't going to cause any major problems, and it might have only a few small bugs left before it is ready to be released.
The reason for not adding it to the download page is that people would then receive a message in the message log requesting they upgrade to it. If all goes according to plan we'll be able to release 5.8 in a few weeks, and it would be a bad experience to bug people about upgrading twice in one month.
I suspect that if somebody were experiencing a bug that is fixed in 5.6, they would be happy to start using it now and not be too annoyed when they see the upgrade notice for 5.8.
Is there any chance of a purge function being implemented?
I haven’t heard any talk of one. I’ll bring it up with David, it sounds like something a project might want.
Hot topic: why is the hourly benchmark value different between Linux and Windows, or so it's claimed? When run with stock BOINC 5.4.9, for example, Windows kicks out 8.1 per hour while the same run under Linux kicks out 5.0. Yet the WUs are processed at equal speed, i.e. a job taking 2 CPU hours on Windows would take nearly equal time on Linux.
It has been my experience that the Microsoft compiler has been better at optimization than GCC. I'm sure I'll get flamed by the OSS crowd, but most of the projects are seeing the same result.
I should point out that the optimizers have been able to even things out through a lot of trial and error, turning the various GCC optimization switches on and off.
If the optimizers want to submit a patch that contains different non-CPU specific optimizations I’m sure we could use them.
To submit questions for next week just click on the comments link below and submit your question.
Thanks in advance.