Support #721
bad performance with many channels and large history
Status: Feedback
Priority: Normal
Assigned to: -
Category: -
Target version: -
Start date: 06/08/2012
Due date: -
% Done: 0%
Votes: 1
Description
<xnox_> I am not happy
<xnox_> smuxi was eating 100% cpu on my server
<xnox_> and reconnecting to server was painfully slow upto 10 minutes to load up the channel user lists & the backlogs
<xnox_> and I only had something like 20 channels open
* xnox_ maybe fiddled with persistent storage settings too much
<xnox_> overall I'm using xchat right now
<xnox_> should i be using daily PPA or will that not make a difference
<Cobrian> Sounds like a bug to me
<Cobrian> I don't think meebey has fiddled with the server side too much lately
<Cobrian> Was the 100% CPU condition on from server start or did it appear after extended use?
<Cobrian> (how long was the server component on before you had problems?)
<xnox_> like a few weeks
<xnox_> i killed and restarted the server
<xnox_> reconnecting from the client cause it to go into 100% cpu again
<xnox_> and taking forever to load the backlog.
<xnox_> Cobrian: how long does a reconnect to the server take for you? (and reload all the channels)
<Cobrian> Uhh, at 9 channels currently, with a 50k persistent buffer, maybe a minute with my 100Mbps line?
<Cobrian> Haven't really timed it, fast enough for me to not really mind
<Cobrian> Oh right, and it's a bit slower than that since I only have g-WLAN, so 54mbit maximum
<Cobrian> I doubt such bandwidth is really even required, it's more about parsing the buffers at both ends, maybe
<Cobrian> I remember how meebey spent several weeks just making sure he had squeezed as much speed out of the parser as possible
<xnox_> well I have 100Mbps & 50k persistent buffer and it takes on the range of 15-20 minutes to get all the channels & backlogs
<xnox_> I have about 20 channels
<xnox_> something is not right, maybe my server is throttled?
<Cobrian> Might be, shouldn't take that long
<Cobrian> Is it a physical server or a virtual one?
<xnox_> ec2 micro
<xnox_> virtual
<xnox_> how to migrate servers correctly?
<xnox_> smuxi server that is
<Cobrian> Hmm. The connect phase does use up some cycles, but I'm not familiar with cloud farms to know how badly they start throttling cpu use if they detect a sudden spike
<xnox_> $ du --si -s .local/share/smuxi/*
<xnox_> 151M .local/share/smuxi/buffers
<xnox_> 60M .local/share/smuxi/logs
<xnox_> and the server has 5Mbit/s symetric link or so
<Cobrian> Copying those over should be enough, although I might consider clearing the buffer dir and deleting the original ini file
<xnox_> if I have to redownload *everything* every single time that's bad.
<Cobrian> And setting it up again
<xnox_> I see no local artifacts, so does it not cache locally and synchronise the delta with the server?
<Cobrian> No local caches
<xnox_> that means I should move my server to LAN, but that will suck when I go away to a conference
<Cobrian> Set your scrollbacks to be shorter, that might help
<xnox_> which one of the settings? cause I still want full logs, at least on the server.... but then notifications will be wrong =(
<Cobrian> Buffer lines
<Cobrian> That's the amount the client will download on connect
<xnox_> It was exceptionaly useful to suspend, move to new meeting room, resume and get the messages across during the UDS
<Cobrian> At some point there should be a system which will download more scrollback when you scroll past the local client cache
<Cobrian> But that's still in development I think
<xnox_> yeah something like http://www.smuxi.org/issues/show/591 but on steroids
<xnox_> do last bandwidth connection, and then start sync up
<Cobrian> There should be a ticket for it...
<xnox_> but I don't understand the reasons for not downloading / keeping historic cache locally
<xnox_> apart from 'not developed yet'
<xnox_> =)
<Cobrian> It's kept in memory I believe, at least my current backlog is loads longer than the 2000 I have my buffer set at
<Cobrian> As long as you don't quit the client, it should just delta
<xnox_> but I do want to quick my client =/
<xnox_> s/quick/quit
* xnox_ does reboot testing
<Cobrian> But the buffer type labels in the preferences are a bit unclear
<xnox_> of kernel/filesystems/installer etc.
<Cobrian> Well, that's what you get for running stuff on a testbed :D
* xnox_ only has one machine =(((((
<xnox_> and no VM is not bare metal testing
<Cobrian> Get a xenclient base and do two VM's on your workstation machine
<Cobrian> Xenclient is as close as
<Cobrian> Especially when you can pick which VM gets hardware level access
* xnox_ works on linux and doesn't like citrix name...
<Cobrian> I tried it, only reason I didn't continue was that my fingerprint reader didn't work and the fact it kept doing weird artefacts on screen sometimes
<Cobrian> Xen stuff is basically a minimal linux that runs the vm base layer
<xnox_> http://www.smuxi.org/issues/show/685 ?
<Cobrian> Yeah, that and just wayback scrolling, first to engine buffer and then over to logs, even
<Cobrian> There's been talk some time back but I guess meebey just hasn't found a good way to bring it about
<xnox_> so right now my option is to move the server to LAN or to continue using xchat, which is actually very nice
<xnox_> and I am not going to use irssi
<Cobrian> Well, yeah, unfortunately, unless meebey is lurking and decides to help you debug the server side, because I'm still convinced it's either a bug caused by you doing the move instead of installing a new engine from scratch alltogether, or a problem caused by EC2
<xnox_> i never moved the engine
<xnox_> i want to move it now, due to performance
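A rough scale check from the figures quoted in the log above, as a minimal Python sketch. It assumes the "50k persistent buffer" means roughly 50,000 buffered lines per channel; the per-message size is simply the reported buffers directory size divided by that message count.

```python
# Back-of-envelope estimate from the numbers in the chat log.
# Assumption: "50k persistent buffer" ~= 50,000 buffered lines per channel.

channels = 20
lines_per_channel = 50_000
buffer_dir_bytes = 151_000_000  # "151M .local/share/smuxi/buffers" (du --si)

total_messages = channels * lines_per_channel
avg_bytes_per_message = buffer_dir_bytes / total_messages

print(f"messages to load on reconnect: {total_messages:,}")            # 1,000,000
print(f"average stored size per message: ~{avg_bytes_per_message:.0f} bytes")  # ~151
```

On those assumptions a reconnect has to parse and ship on the order of a million stored messages, which is consistent with the multi-minute reconnect times described above.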
History
Updated by Mirco Bauer 4581 days ago
This sounds like an issue with the persistent message buffer, which is stored in the db4o database. I am working on a new message backend that will be LevelDB-based and should use far fewer resources, both memory- and CPU-wise. See #717 for more details.
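For context, a minimal sketch of the idea behind such a key-ordered message store, written against the plyvel LevelDB binding. This is not Smuxi's actual backend or schema; the key layout, the chat_id/seq naming, and the JSON values are assumptions for illustration only. The point is that fetching the newest N messages becomes a bounded reverse range scan instead of loading a whole object database.

```python
# Sketch of a key-ordered message buffer (the general idea behind a
# LevelDB-style backend). Not Smuxi's real code; names are illustrative.
import json
import time

import plyvel  # third-party LevelDB binding, assumed available (pip install plyvel)

db = plyvel.DB('/tmp/message-buffer-sketch', create_if_missing=True)

def append(chat_id: str, text: str, seq: int) -> None:
    """Store one message under a key that sorts by chat and sequence number."""
    key = f"{chat_id}:{seq:020d}".encode()
    value = json.dumps({"ts": time.time(), "text": text}).encode()
    db.put(key, value)

def last_n(chat_id: str, n: int) -> list:
    """Fetch the newest n messages with a reverse prefix scan; no full load."""
    prefix = f"{chat_id}:".encode()
    out = []
    for _, value in db.iterator(prefix=prefix, reverse=True):
        out.append(json.loads(value))
        if len(out) == n:
            break
    return list(reversed(out))

for i in range(1000):
    append("#smuxi", f"message {i}", seq=i)
print(len(last_n("#smuxi", 50)))  # 50
db.close()
```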
Updated by Mirco Bauer 3155 days ago
- Status changed from New to Feedback
Smuxi uses SQLite now; can you re-test and report whether the situation has improved? Our benchmarks showed SQLite is much faster.
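To illustrate why an SQLite-backed buffer can make reconnects cheap, here is a hedged Python sketch with a purely hypothetical schema (not Smuxi's actual tables or column names): the "last N lines of a chat" query that a reconnect effectively performs is served from an index rather than by scanning the whole history.

```python
# Hypothetical schema for illustration only; shows the indexed
# "newest N lines per chat" access pattern a reconnect relies on.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        chat_id  TEXT NOT NULL,
        ts       REAL NOT NULL,
        body     TEXT NOT NULL
    )
""")
conn.execute("CREATE INDEX idx_chat_ts ON messages (chat_id, ts)")

# Populate 20 channels with synthetic backlog.
rows = [(f"#chan{c}", time.time() + i, f"line {i}")
        for c in range(20) for i in range(5000)]
conn.executemany("INSERT INTO messages VALUES (?, ?, ?)", rows)

# Reconnect-style query: newest 500 lines of one chat, served via the index.
backlog = conn.execute(
    "SELECT body FROM messages WHERE chat_id = ? ORDER BY ts DESC LIMIT 500",
    ("#chan0",),
).fetchall()
print(len(backlog))  # 500
```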