Useful Addons

Every Mozilla/Thunderbird user knows it: good extensions are rare, and only some of the add-ons are actually useful. For our daily work we need only the good and useful ones.

During the winter holidays I had time to look at many add-ons and found some that might be useful for our daily work. Maybe you would like to have a look at them.


Slides from the „Googol Records (with MySQL)“ Session

IPC is over. My impression: the venue was so big that it was a little difficult to get in contact with others. From a technical and gastronomical point of view, however, the Rheingoldhalle was a good choice. For the next IPC I would recommend anchoring a hotel ship near the hall (the Rheingoldhalle is situated on the bank of the Rhine) to avoid the 30-minute shuttle bus ride to and from the hotel. ;-)

But back to my talk there.

The initial idea for this session came from a performance consulting job in spring this year: for a table with approx. 250 billion entries I found a way to handle about 6,000 read and write queries per second! I applied some very unusual approaches that sped the problem up by a factor of 1,000 to 2,000, simply by thinking about how I would store the items at home if they were real things such as cutlery.

I found that some of these patterns can be applied in general and that they work for nearly every problem involving very big tables. Just see for yourself how I solved the problem.
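To make the catalog idea a bit more concrete, here is a minimal sketch of the pattern as I understand it — all table and column names are my own illustration, not the actual schema from the consulting project:

```sql
-- Hypothetical example: instead of one huge table, the data is split
-- into many small tables, and a small "catalog" table (the "C" from
-- the slides) records which table holds which key range.
CREATE TABLE catalog (
    min_id     BIGINT NOT NULL,
    max_id     BIGINT NOT NULL,
    table_name VARCHAR(64) NOT NULL,
    PRIMARY KEY (min_id)
);

-- One of the many small data tables; its index stays tiny, so a
-- write no longer has to touch one gigantic index.
CREATE TABLE data_000042 (
    id      BIGINT NOT NULL,
    payload VARBINARY(255) NOT NULL,
    PRIMARY KEY (id)
);

-- A read first asks the catalog which table to query ...
SELECT table_name FROM catalog
 WHERE min_id <= 123456789 AND max_id >= 123456789;

-- ... and then reads from exactly that one small table.
SELECT payload FROM data_000042 WHERE id = 123456789;
```

The second query has to be built dynamically by the application, since the table name comes out of the catalog lookup.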

Please note: the slides probably cannot be understood without some explanation of the ideas behind them. Feel free to post your questions as comments!
PS: The „C“ in the pictures is the „catalog“.
PPS: Yes, we have uploaded a new version to SlideShare; the pictures are working now.

Googol Data


Is MySQL partitioning useful for very big real-life problems?

Some months ago I helped out in another project that had some performance problems. They had a very big table, and the index of the table was bigger than the table itself. As every change to the table forces MySQL to update the index, a single write can take several seconds on tables of that size.

So I thought it would be a good idea to split that big table into many very small ones. This should reduce the overhead of reloading big indexes, since only very small parts would have to be reloaded. And the next thought was: is it possible to use the „new“ MySQL partitioning for that?
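As a sketch of what this could look like (partitioning was introduced in MySQL 5.1; the table name and the range boundaries here are purely illustrative):

```sql
-- Hypothetical example: range partitioning splits one logical table
-- into several physical pieces, each with its own, much smaller index.
CREATE TABLE big_table (
    id      BIGINT NOT NULL,
    payload VARBINARY(255) NOT NULL,
    PRIMARY KEY (id)
)
PARTITION BY RANGE (id) (
    PARTITION p0   VALUES LESS THAN (100000000),
    PARTITION p1   VALUES LESS THAN (200000000),
    PARTITION p2   VALUES LESS THAN (300000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- The server prunes partitions automatically: this query only has to
-- touch partition p1 and its index.
SELECT payload FROM big_table WHERE id = 150000000;
```

With `EXPLAIN PARTITIONS SELECT ...` you can check which partitions a query actually touches. The appeal compared to splitting the table by hand is that the application keeps talking to one logical table and needs no catalog lookup.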