We’re rolling out monit on our new platform at the request of a vendor, to manage their new service. I’ve always been dead against these kinds of automated failure-recovery tools: they often require human intervention after the fact anyway, and on every platform I’ve managed the server would have been failed out of service regardless, so why not restart the services once the root cause analysis is done? My tune is slowly changing though, and I’m coming to appreciate this method of systems recovery a lot more.
Whilst playing with it, though, I got the following error:
```
root@newshiny:~# monit summary
monit: error connecting to the monit daemon
```
What what? The daemon’s definitely running, so why can’t I poll its status?
```
root@newshiny:~# ps aux | grep monit
root      325293  0.0  0.0  16440  1276 ?      Sl   Mar27   0:10 /usr/bin/monit
root      496627  0.0  0.0 105348   832 pts/0  S+   11:43   0:00 grep monit
root@newshiny:~# service monit status
monit (pid 325293) is running...
```
After reading the documentation properly, this monit: error connecting to the monit daemon turned out to be an epic case of rushing into things, skimming the docs and PEBCAK!
Solving monit: error connecting to the monit daemon
Monit can present an HTTP interface, which I hadn’t enabled as I thought it was just for humans; it turns out the command-line tools use it too!
It’s really easy to enable: in /etc/monit.conf (or wherever your conf file is located), just add
```
set httpd port 2812 and
    use address localhost
    allow localhost
```
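If anything else runs on the box, you may not want the interface open to every local user. Here is a sketch of the same stanza with HTTP basic auth added — the admin:monit credentials are placeholders for illustration, not part of the original setup:

```
set httpd port 2812 and
    use address localhost      # bind to loopback only
    allow localhost            # permit connections from localhost
    allow admin:monit          # hypothetical username:password -- change these
```

With auth enabled, the monit command-line tools authenticate using the same credentials from the config file, so `monit summary` keeps working unchanged.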
and restart monit with
```
service monit restart
```
and Bob’s your mother’s brother.
```
root@newshiny:~# netstat -lpn | grep 2812
tcp        0      0 127.0.0.1:2812      0.0.0.0:*      LISTEN      325293/monit
```
```
root@newshiny:~# monit summary
The Monit daemon 5.2.5 uptime: 19h 18m
Process 'shiny_manager'             running
Process 'shiny_proxy'               running
Process 'shiny_server'              running
System 'system_newshiny'            running
```
Problem solved!
