Once again I've been really neglecting content additions to the site, although I have some good excuses: I've buggered up the wrist on my writing hand, so sitting here banging out keystrokes hasn't been a particularly appealing prospect, and I've also been working a lot in real life and simply haven't had the time anyway.
One thing I have noticed, though, is that something went tits up on the server this month: the site apparently crashed, and the software I use to keep it up and running failed to bring it back to life for reasons I couldn't fathom. I reinstalled everything and discovered that every couple of hours the whole thing gets shut down. I do have a contingency in place to bring it back to life if it's detected to be dead after 15 minutes (good ol' cron), but exactly what's happening behind the scenes I can't quite figure out yet.
I've got a feeling this is something to do with NodeJS, but as I say, I have no idea what exactly. I've tried rolling back software versions, and all that did was extend the time before the site dies to four hours. It's all up to date now, so it's back to copping it every two hours, but I've altered the cron job to do its checks every minute. So if you do come along to the site and find it chucking out a 503 server error, wait a minute and it'll come back to life.
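For the curious, the watchdog is nothing fancy. The actual script isn't shown here, but it'd look something like this sketch; the paths, port, and the way the site gets started are all assumptions, not the real setup:

```shell
# Hypothetical crontab entry: run the health check every minute.
# * * * * * /home/xenodyne/check_site.sh >> /var/log/site-watchdog.log 2>&1

#!/bin/sh
# check_site.sh - sketch of a cron watchdog for a NodeJS site.
# If the site answers with anything other than HTTP 200 (e.g. the 503
# mentioned above), kill the old process and start a fresh one.
# URL, process name, and start command are guesses for illustration.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 http://localhost:3000/)
if [ "$STATUS" != "200" ]; then
    pkill -f 'node server.js' 2>/dev/null
    cd /home/xenodyne/site && nohup node server.js >/dev/null 2>&1 &
fi
```

Nothing clever, but with cron firing it every minute the site is never dead for more than about sixty seconds.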
The server issues may now be fixed. It appears my host added and simplified some things in the background that I previously had to do myself, so my best guess is that their changes were conflicting with what I already had installed. The site may still go down at some point, and I'm not sure whether the new setup will automatically restart it, since I'm no longer using a cron job to check if it's down. Either way, I'll keep my eye on it and correct things accordingly. Hopefully this is the start of a much more stable future for XenoDyne.