It is time to tell us what you think. We are conducting a poll to determine the hot spots for what needs to happen with BOINC. We welcome all kinds of feedback; the more people who respond and the better coverage we get, the more we can improve BOINC and help the projects improve their overall experience.
I turned on my TV this weekend to catch up on some of my recordings and found this in my queue of recent recordings: Rosetta Presentation
I have my media center set up to record any Computer Science Colloquium from the University of Washington that airs on UWTV. This one happened to be David Baker of R@H giving a presentation to the computer science students about how Rosetta works and how they use the results. He even gave BOINC a plug and discussed how R@H was changing how they do things.
This weekend I’m going to try to get a register dump of each thread added to the diagnostic output. Along with that, I would like to get the function pointers and function parameters added as well.
I did manage to shrink the PDB file size for R@H down to 7MB, which still seems a little steep for mass consumption. Maybe with the function pointers and parameters we can continue to bring down the error rates.
In order to gain ground I need to be able to see where the program is stuck on the destination machine. There are three ways to do this:
1. Have the community report which workunit stalled on their machine and attempt to reproduce it.
2. Hook up a debugger on the target machine and have the person at the keyboard create a dump file of the process.
3. Introduce a trigger in the executable so that a certain action causes it to dump its own backtraces.
Option one proves difficult just in managing the sheer number of workunits to look at. Roughly 550 workunits a day are being aborted or are exceeding their allotted CPU time. R@H hasn’t been able to reproduce the problem in the lab with the workunits they have looked at, and they are continuing to look.
Option two doesn’t scale very well: of all the people hitting this problem, only a small fraction know how to create a dump file with a debugger, and only a small fraction of those are willing to spend the time to compress and split the 200MB to 350MB file into smaller pieces and email them to me so I can look at them. Then of course there is only one of me, and I still have all my other BOINC work to do, like fixing bugs in the 5.3.x clients so we can ship 5.4.0!
Option three didn’t hit me till Monday night. As part of the feature work we did for CPDN, we introduced a way for the core client to notify the science application that it was being aborted so it could clean up after itself. Well, I completely forgot that the 5.2.x clients don’t send the abort command to the application when I burned the midnight oil to deliver the backtrace functionality for R@H 4.94. At 4am I had the functionality working for Windows and checked it in.
Fast forward to today. I went looking through the results on Ralph@Home and discovered that the backtraces were not being logged like I thought they should have been. After further investigation I realized that the 5.2.x clients were sending the quit command instead of the abort command. Talk about killing morale. I have posted in the Ralph@Home forums that people should upgrade, and I’ve been seeing results come back with 5.3.28, which is good. I’m just not sure when I’ll have enough information about the bug.
We are pretty close to having 5.4 ready for public release. I believe in a week or less. But a big problem remains: typically it takes a few months for a new stable client to reach a high enough level of adoption that patterns emerge that can be tracked.
After some discussions with David Baker we are going to drop the maximum amount of time allotted for a workunit to run on a machine. That’ll keep a good chunk of the wasted CPU cycles down. I am also selling the idea of releasing the PDB file with the Rosetta application for the public project. Now granted, it is a 30MB file. But without it, none of the diagnostic features built into the BOINC API for tracking down bugs will work. Isn’t a 30MB insurance policy worth it if the project can get something useful out of an abort or crash that leads to bug fixes?
Results on RALPH@Home, R@H’s alpha project, have been very promising.
To give an idea about how large this problem was for R@H, I guess I need to provide some numbers. So here goes:
R@H receives roughly 115k results a day.
Roughly there are 16k failures a day.
Of those 16k failures a day, 5.5k fell under the ERR_NESTED_UNHANDLED_EXCEPTION_DETECTED and 0xc0000005 banner. Those are the two error codes used when something really, really bad has happened on Windows. There are another 1.5k errors with cryptic Windows error codes which may or may not be related.
Now how does this translate to RALPH@Home? Well, if you work under the assumption that RALPH@Home is a mini R@H, then the percentages should be roughly the same.
That said, sure enough, RALPH@Home had roughly the same breakdown of errors that the public project had. Here are some rough stats for RALPH@Home:
RALPH@Home receives roughly 1k results a day.
Before 4.93 was released for Beta the failure rate was 150 or so a day.
Now with 4.93 in the mix it has dropped to 100 or so a day.
Keep in mind that the Mac and Linux clients have not been updated yet, so their error rates remain unchanged.
RALPH@Home went from a 25% failure rate down to a 12% failure rate. Now if you remove the results from Linux and the Mac, the failure rate for the Windows client is floating at 5%.
I’ll include the current error rates in the public project and RALPH@Home below.
Now I’m on to the next biggest problem which has been deemed the ‘1% bug’.
For those who noticed the error code 1 in the charts below: that error code is given when Rosetta could not find something in one of the pre-staged files downloaded to your machine, or when the application felt something really bad had happened and it couldn’t continue. With 4.82, the actual error data was being written to a different log file than the one BOINC sends back to the server. Starting with 4.94, the reason for the application quitting will be logged and sent back to the server in a way that can be easily tracked and fixed, without people having to post workunit names in the forums.
So, as many of you probably already know, I’ve been brought onboard as a consultant with the Rosetta@Home project. A big issue they were experiencing was related to random crashes when BOINC would notify them that it was time to quit and for another application to begin.
I believe I have found and fixed this style of bug, but alas only time and testing will tell.
To understand this bug I need to explain how things work within a science application. When a science application starts and notifies BOINC that it supports graphics, three threads are created to manage what is going on.
The worker thread is the heavy lifter of the science application: it handles all the science. The majority of the memory allocations and de-allocations happen in this thread.
The graphics thread is responsible for displaying the graphics window and for hiding and showing the window at BOINC’s request.
The timer thread is responsible for processing the suspend/resume/quit/abort messages from BOINC as well as notifying BOINC of trickles.
Now when the science application received the quit request, it would call the C runtime library function exit, which is supposed to shut down the application. Part of this shutdown operation calls the Win32 API ExitProcess. ExitProcess lets the threads continue to run while it cleans up the heap, a holdover from letting DLLs decrement their ref counts and unload themselves if nobody else is using them. Therein lies the problem: the worker thread was still running, trying to allocate and de-allocate memory from a heap that had already been freed by ExitProcess.
This in turn would cause an access violation, which shows up in the log file as 0xc0000005.
Science applications now have the option of requesting a hard termination, which stops all executing threads and then cleans up after the process. In essence, the application calls TerminateProcess on itself. This also means the application has no chance of writing any more information to a state file or checkpoint file when the BOINC API hasn’t been notified that a checkpoint is in progress. Use with care. It also means that BOINC should no longer believe a task is invalid because of a random crash on shutdown.
I believe this will take care of quite a few ‘crash on close’ style of bugs. What was really annoying about this kind of bug is that it crashes in a different location each time. Sometimes it would crash in the timer thread and sometimes in the worker thread. A good chunk of the time the clients would report an empty call stack which doesn’t give us anything to work off of.
This style of bug would affect slower machines more than faster ones. The bug wouldn’t surface if the timer thread could finish all the CPU instructions needed, from the time exit was called to the time ExitProcess actually kills the threads, within one OS thread-scheduling cycle.
I think Rosetta@Home hit this bug more often than most projects because of the amount of memory it allocates while doing its thing: 150MB per process. That was just enough to get it to happen on my machine if I left it running for 10 minutes with the graphics running.
It looks like both Einstein@Home and Rosetta@Home are going to be testing this out in the next few days. I’m excited to see what this change does for the success rates of the tasks being assigned to client machines.
Well tomorrow I’ll be taking a trip to the Rosetta@Home project.
They are going to explain how Rosetta works so I can try to help them out with the problems they are having with the BOINC interface code. I believe it’ll be a great learning experience for both Rosetta@Home and BOINC.
It seems every time we learn about a new project, there is another way of doing something that is just slightly different from any other project.