Ah yes, good point. I’ll wait a bit and then see how it goes.
Yes, I suspect you’re correct. I did search previously for issues with Stretch and mysql, but to no avail. This thread has given me an excellent set of leads, and I’m still working through them as time permits. I’ll see if postgresql can help, just as a reference point, and I’ll also test the git version of tt-rss along with the other suggestions.
I tested mine and got the following:
Buffer pool size 12799
Buffer pool size, bytes 209698816
Free buffers 8046
But I think that’s just another way of looking at the output of my “percentage of buffer in use” command above, which now gives 36.72% in use. Here are a few other possibly pertinent sections from SHOW ENGINE INNODB STATUS\G, though I wasn’t sure what reasonable values would be.
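For reference, the buffer-in-use percentage can be recomputed directly from the page counts quoted above; the byte figure also confirms the default 16 KiB InnoDB page size (a quick sketch using the numbers from my output, nothing mysql-specific):

```python
# Cross-check the buffer pool figures quoted above.
total_pages = 12799          # "Buffer pool size"
free_pages = 8046            # "Free buffers"
pool_bytes = 209698816       # "Buffer pool size, bytes"

page_size = pool_bytes // total_pages
in_use_pct = 100 * (total_pages - free_pages) / total_pages

print(page_size)             # 16384: the default InnoDB page size
print(round(in_use_pct, 2))  # ~37%, consistent with the ~36.72% figure
```

The slight difference from 36.72% is just the two readings being taken at different moments.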
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 96934
OS WAIT ARRAY INFO: signal count 92154
Mutex spin waits 105554, rounds 2557628, OS waits 83528
RW-shared spins 9152, rounds 270858, OS waits 8672
RW-excl spins 1806, rounds 138869, OS waits 4448
Spin rounds per wait: 24.23 mutex, 29.60 RW-shared, 76.89 RW-excl
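As a sanity check on that last line, “Spin rounds per wait” appears to be just rounds divided by spin waits for each class; recomputing from the counters above reproduces the reported figures (a quick sketch):

```python
# Recompute "Spin rounds per wait" from the semaphore counters above.
counters = [
    ("mutex",     2557628, 105554),  # rounds, spin waits
    ("RW-shared",  270858,   9152),
    ("RW-excl",    138869,   1806),
]

for name, rounds, spins in counters:
    print(name, round(rounds / spins, 2))
# mutex 24.23, RW-shared 29.6, RW-excl 76.89, matching the status line
```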
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
4498 OS file reads, 324801 OS file writes, 131871 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.64 writes/s, 0.25 fsyncs/s
Yep, it’s definitely mysql; both atop and iotop point to it.
Yes, that was my understanding. Although from everything above, it seems mysql has enough memory anyway, so this issue is probably moot.