All posts by kit

InfluxDB Cloud setup

– create cloud server
– connect with cli client
– create admin user:
create user kit with password 'good password' with all privileges;
– auth (run the influx CLI's auth command and enter the new credentials)
– create database cooldb
– create user to write data points:
create user dbwriter with password 'good password';
grant write on cooldb to dbwriter
– create a read-only user for graphing:
create user dbreader with password 'good password';
grant read on cooldb to dbreader
– send test datapoint
curl -X POST 'https://server.influxcloud.net:8086/write?db=cooldb&u=dbwriter&p=good password' --data-binary 'test value=1000'
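To check that the reader account works, the point can be queried back through the /query endpoint (a sketch reusing the hypothetical server and passwords from above; --data-urlencode takes care of the space in the password):

curl -G 'https://server.influxcloud.net:8086/query' \
  --data-urlencode 'db=cooldb' \
  --data-urlencode 'u=dbreader' \
  --data-urlencode 'p=good password' \
  --data-urlencode 'q=SELECT * FROM test'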

Using the wp tool to set WordPress posts to noindex

wp post meta update <id> _yoast_wpseo_meta-robots-noindex 1

You can use url_to_postid to convert a URL to its post ID, either in the wp shell or by creating a wp command.

for url in $(cat /tmp/urls-to-noindex); do
  id=$(wp url2post "$url")  # url2post is the custom command defined below
  if [ "$id" == "0" ]; then
    echo "$url - no post found"
  else
    wp post meta update "$id" _yoast_wpseo_meta-robots-noindex 1
  fi
done

Create scaffolding for a command with wp scaffold plugin.
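For example (the plugin slug here is just an illustration):

wp scaffold plugin url2post
wp plugin activate url2post

The snippet below can then live in a command.php that the plugin's main file requires.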

command.php snippet:

$url2post_command = function( $args ) {
    if ( count( $args ) == 0 ) {
        // WP_CLI::error() prints the message and exits
        WP_CLI::error( "no url" );
    }
    $post_id = url_to_postid( $args[0] );
    WP_CLI::line( $post_id );
};
WP_CLI::add_command( 'url2post', $url2post_command );

*BONUS* how to get a post's publish date:
wp post get <id> --field=post_date
(or post_modified)

Converting a PDF to JPG and back again, without looking terrible

Sometimes you need to convert a PDF to JPGs, mess around with them, then recreate the PDF. Here’s the method I use:


convert -density 400 file.pdf -alpha remove file.jpg # this creates file-0.jpg, file-1.jpg, etc. A density between 300 and 600 is good; -alpha remove is needed if there is transparency in the pdf
gimp file-0.jpg
convert -units PixelsPerInch -density 400 -quality 25 $(ls -v file-*.jpg) file.pdf # gimp usually saves at 72 dpi, so the resolution needs to be set back to 400; higher quality means a larger file

That’s it! Almost. If there’s transparency in the PDF, JPG won’t like it, so you’ll need to use PNG. But convert doesn’t automatically number the PNGs, so you need to do:


convert -density 400 ~/Downloads/PBH.PDF pbh-%d.png

Sometimes you need to convert the pngs to jpg before making a pdf. YMMV.
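A sketch of that PNG-to-JPG pass (flattening any transparency onto white; the filenames follow the %d pattern above):

for f in pbh-*.png; do
  convert "$f" -background white -alpha remove -alpha off "${f%.png}.jpg"
done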

WordPress xmlrpc with a different Host header in Python

If you are ever in a situation where you need to run an xmlrpc request against a specific WordPress server using the Python xmlrpc library, it can be somewhat difficult: the library doesn’t give you an easy way to override the Host header in the request, so you can pick the server or the Host, but not both. Luckily, it does let you override the HTTP transport class it uses, so you can provide your own. Here’s a transport that seems to work for connecting locally with any given Host; it would be fairly easy to modify to connect to any server.

import xmlrpclib

class LocalTransport(xmlrpclib.Transport):
    def make_connection(self, host):
        # remember the host from the URL so it can be sent as the Host header
        self.real_host = host
        # but open the actual TCP connection to localhost
        return xmlrpclib.Transport.make_connection(self, '127.0.0.1')

    def send_request(self, connection, handler, request_body):
        try:
            import gzip
        except ImportError:
            gzip = None  # python can be built without zlib/gzip support
        if self.accept_gzip_encoding and gzip:
            connection.putrequest("POST", handler, skip_host=True, skip_accept_encoding=True)
            connection.putheader("Accept-Encoding", "gzip")
        else:
            connection.putrequest("POST", handler, skip_host=True)
        connection.putheader("Host", self.real_host)

Telling Yum to Keep a Certain Kernel Version

VPS providers only support a set of kernels. Yum updates can sometimes remove old kernels, which can cause problems if the VPS provider doesn’t support the newer kernels yet. Yum supports pinning a certain kernel, so you can be sure that the system can boot after an upgrade and have an orderly kernel upgrade when the time comes.

From this article:

# prevent yum from deleting old kernel
yumdb set installonly keep kernel-core-3.17.4-301.fc21.x86_64

# allow yum to delete old kernel
yumdb del installonly kernel-core-3.17.4-301.fc21.x86_64
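The exact package name to pin can be found by listing the installed kernels first:

rpm -q kernel-core

(on Fedora 21, as in the example above; older releases ship a plain kernel package instead of kernel-core)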

MySQL Replication

I’ve been working on creating a new master-slave setup out of an existing monolithic database server. I’ve been following Digital Ocean’s guide to replication, but have had to figure out some other steps and caveats.

Random notes:
* relay-log needs to be defined on the slave (see the my.cnf sketch after this list)
* if the slave gets into a bad state, you can use 'RESET SLAVE;' to get it healthy.
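A minimal sketch of the relevant slave my.cnf settings (the server-id value and log path are examples, not from the original setup):

[mysqld]
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log

The master needs its own unique server-id and log_bin enabled.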

Here are those steps:

  1. create the backup on the original server: mysqldump --all-databases -u root -p > ./full-backup-2015.02.03-bak
  2. import it on the new master: cat full-backup-2015.02.03-bak | mysql -p
  3. re-export it with the "master data": mysqldump --all-databases --master-data -u root -p > full-backup-2015.02.03-bak
  4. modify the master data in the backup to include the user/password that will be doing the replication (see the CHANGE MASTER sketch after this list)
  5. make sure ansible ACTUALLY adds the replication user with the correct host
  6. run START SLAVE on the slave
  7. if ansible user creation makes it so there are slave users on both servers, do: STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE; to skip it.
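For step 4, a dump made with --master-data begins with a CHANGE MASTER TO statement; after adding the replication credentials it looks something like this (host, user, password, and coordinates are placeholders):

CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='replicator',
  MASTER_PASSWORD='good password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;

After START SLAVE, SHOW SLAVE STATUS\G should report Slave_IO_Running and Slave_SQL_Running as Yes.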