Made a little progress on the slow pages issue.
Just as I was about to start coding my C-optimized database stats server, thinking about which IPC protocol I'd implement and how to design the functions I need right now... it hit me.
Primary key lookups on a properly indexed table take O(log n), as do similar updates.
However, COUNT(*) operations on a constantly changing table have no better optimization than O(n). The entire table must be traversed every time.
When you consider 30,000 rows requiring 30,000 comparison operations, repeated some 500,000 times daily (roughly 15 billion comparisons a day), it's no wonder mysqld's CPU usage has been so high as of late.
Perhaps the better solution is to design properly cached tables holding the same information, calculated once and reused for the entire day rather than computed on the fly.
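A minimal sketch of that daily-cache idea, using sqlite3 so it's self-contained (the table and column names here are made up for illustration, not the actual schema): compute COUNT(*) once, store it in a small stats table keyed by day, and every later request is a cheap keyed lookup instead of a full scan.

```python
import sqlite3
from datetime import date

# Illustrative schema: a big "characters" table plus a tiny per-day cache.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE characters (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE stats_cache (stat_date TEXT PRIMARY KEY, row_count INTEGER);
""")
conn.executemany("INSERT INTO characters (name) VALUES (?)",
                 [("char%d" % i,) for i in range(30000)])

def character_count(conn):
    """Return today's cached count, computing it only on a cache miss."""
    today = date.today().isoformat()
    row = conn.execute(
        "SELECT row_count FROM stats_cache WHERE stat_date = ?", (today,)
    ).fetchone()
    if row is not None:
        return row[0]                      # cache hit: indexed O(log n) lookup
    (count,) = conn.execute("SELECT COUNT(*) FROM characters").fetchone()
    conn.execute("INSERT INTO stats_cache VALUES (?, ?)", (today, count))
    return count                           # cache miss: one O(n) scan per day

print(character_count(conn))  # first call scans the table
print(character_count(conn))  # second call reads the cache
```

The trade-off is staleness: the cached figure lags the live table by up to a day, which is fine for stats pages but not for anything transactional.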
I also wrote a simple script to poll mysqld's process list. I'd love to catch it at a stall point, but for now it's a game of chance; so far it has confirmed one thing I already suspected:
Locked rows stall a lot of reads.
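The gist of that checker, sketched here with made-up sample data rather than a live connection (the real script would feed it rows from SHOW FULL PROCESSLIST): filter for queries sitting in a lock-wait state longer than some threshold.

```python
from typing import NamedTuple

# Rows shaped roughly like SHOW FULL PROCESSLIST output. The fields,
# sample data, and threshold below are illustrative assumptions.
class ProcessRow(NamedTuple):
    id: int
    user: str
    command: str
    time: int       # seconds spent in the current state
    state: str
    info: str       # the running query, if any

def stalled_queries(rows, min_seconds=2):
    """Return rows stuck waiting on a lock for at least min_seconds."""
    return [r for r in rows
            if "lock" in r.state.lower() and r.time >= min_seconds]

sample = [
    ProcessRow(1, "web", "Query", 0, "Sending data", "SELECT ..."),
    ProcessRow(2, "web", "Query", 7, "Locked", "SELECT COUNT(*) ..."),
    ProcessRow(3, "web", "Sleep", 30, "", ""),
]
for r in stalled_queries(sample):
    print(r.id, r.time, r.info)
```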
Currently using a MEMORY table. May consider moving back to InnoDB. I'll have to review the locking supported/used on MEMORY tables first, though; if I recall correctly, the MEMORY engine only does table-level locking, while InnoDB locks at the row level.
Also reduced some redundant queries on character pages.
And implemented bin-log-sync, so there's one less thing to worry about at the next server crash.
Hah, and our London server... what a joke. So much downtime, so many crashes. I might move to another data center, somewhere with a better track record as of late.