http://www.brendangregg.com/blog/2016-12-21/linux-tracing-in-15-minutes.html
Using Raspberry Pi as Remote
notes:
lirc_rpi parameters MUST be set in /boot/config.txt; modprobe options are ignored (a config.txt sketch is at the end of this section).
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/developing-an-alexa-skill-as-a-lambda-function
LIRC:
http://www.raspberry-pi-geek.com/Archive/2015/10/Raspberry-Pi-IR-remote
http://www.raspberry-pi-geek.com/Archive/2014/03/Controlling-your-Pi-with-an-infrared-remote
http://alexba.in/blog/2013/01/06/setting-up-lirc-on-the-raspberrypi/
https://www.raspberrypi.org/documentation/usage/gpio/
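As the note above says, the lirc_rpi options have to go in /boot/config.txt. A minimal sketch, assuming the lirc-rpi device tree overlay; the GPIO pin numbers are only examples, adjust them to your wiring:
# in /boot/config.txt
dtoverlay=lirc-rpi,gpio_in_pin=18,gpio_out_pin=17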
Adding Mysql Replication On New Server
Update: relay-log.info no longer exists; use SHOW MASTER STATUS instead, see https://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterstatus.html
How to set up a new MySQL replica, based heavily on this blog post
– Rsync database from existing secondary server (source):
rsync -Sa --progress --exclude=mastername* --exclude=master.info --exclude=relay-log.info /var/lib/mysql/* new_server:/var/lib/mysql
– pause replication on original secondary server
mariadb> stop slave
– cat relay-log.info (the first four lines are: relay log file, relay log position, master binlog file, master binlog position)
./mysqld-relay-bin.000875
208022
mysql-bin.000294
445713389
2
4
– re-rsync
– resume replication on original secondary server
mariadb> start slave
– disable replication on new server by adding 'skip-slave-start' to my.cnf
– start mariadb on new server
– set relay bin and mysql bin data on new server (a full CHANGE MASTER TO example is sketched after this list)
# mysqlbinlog --start-position=208022 ./mysqld-relay-bin.000875 | mysql -p
mariadb> CHANGE MASTER TO MASTER_HOST='
– start replication
mariadb> start slave
– check replication status
mariadb> show slave status\G
– remove skip-slave-start from my.cnf
– enjoy!
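For reference, the full CHANGE MASTER TO statement might look like the following sketch, using the master binlog file and position from the relay-log.info output above; the host, user, and password are placeholders:
mariadb> CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='mysql-bin.000294',
    MASTER_LOG_POS=445713389;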
iSCSI
Creating a target: https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/
1) install targetcli package
# yum install targetcli
# systemctl start target
# systemctl enable target
2) start targetcli: http://linux-iscsi.org/wiki/Targetcli
# targetcli
3) create a file-backed backstore
> backstores/fileio create lun0 /opt/lun0.img 100M
User-backed targets (an alternative to the fileio backstore above):
https://github.com/open-iscsi/tcmu-runner
3) create a user-backed backstore
> backstores/user:hello create lun0 100M unneeded
4) set iqn
> iscsi/ create iqn.2009-11.com.kitwestneat:t1
5) export LUN
> cd iscsi/iqn.2009-11.com.kitwestneat:t1/
> tpg1/luns create /backstores/user:hello/lun0
6) create ACL
> tpg1/acls create iqn.2009-11.com.kitwestneat:client
7) on initiator:
# iscsiadm --mode discovery --type sendtargets --portal 192.168.10.3
# iscsiadm --mode node --portal 192.168.10.3 --targetname iqn.2009-11.com.kitwestneat:t1 --login
8) test it
[root@localhost ~]# dd if=/dev/sda bs=100 count=5
Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello world!Hello 5+0 records in
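To make the target and the login survive reboots (the point of the certdepot link above), the targetcli config can be saved and the initiator node set to log in automatically; a sketch, reusing the portal and IQN from the steps above (saveconfig runs inside targetcli and writes /etc/target/saveconfig.json):
> saveconfig
# iscsiadm --mode node --portal 192.168.10.3 --targetname iqn.2009-11.com.kitwestneat:t1 --op update -n node.startup -v automatic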
Gluster use case:
Gluster Solution for Non Shared Persistent Storage in Docker Container
Gluster as swift backend
https://github.com/openstack/swiftonfile
UAS error
Oct 1 18:32:37 wacko kernel: [257893.680902] sd 6:0:0:0: [sdc] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD IN
Oct 1 18:32:37 wacko kernel: [257893.680907] sd 6:0:0:0: [sdc] tag#0 CDB: Read(10) 28 00 1d 1c 58 80 00 00 08 00
Oct 1 18:32:37 wacko kernel: [257893.680957] scsi host6: uas_eh_bus_reset_handler start
Oct 1 18:32:37 wacko kernel: [257893.847751] usb 4-2: reset SuperSpeed USB device number 2 using xhci_hcd
Oct 1 18:32:42 wacko kernel: [257898.858136] usb 4-2: device descriptor read/8, error -110
Oct 1 18:32:42 wacko kernel: [257898.959042] usb 4-2: reset SuperSpeed USB device number 2 using xhci_hcd
Oct 1 18:32:47 wacko kernel: [257903.970129] usb 4-2: device descriptor read/8, error -110
Oct 1 18:32:47 wacko kernel: [257904.122216] scsi host6: uas_post_reset: alloc streams error -19 after reset
Oct 1 18:32:47 wacko kernel: [257904.122225] usb 4-2: USB disconnect, device number 2
Oct 1 18:32:47 wacko kernel: [257904.125987] sd 6:0:0:0: [sdc] Synchronizing SCSI cache
Ubuntu bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1591521
Similar but not exactly the same:
https://bbs.archlinux.org/viewtopic.php?id=215780
Possibly related to this commit:
https://github.com/torvalds/linux/commit/616f0e6cab4698309ff9e48ee2a85b5eb78cf31a
UAS wiki:
https://en.wikipedia.org/wiki/USB_Attached_SCSI
SCSI EH info:
https://www.kernel.org/doc/Documentation/scsi/scsi_eh.txt
http://events.linuxfoundation.org/sites/events/files/slides/SCSI-EH.pdf
Could try USB passthrough on vagrant:
https://github.com/vagrant-libvirt/vagrant-libvirt
dev.scsi.logging_level:
http://cateee.net/lkddb/web-lkddb/SCSI_LOGGING.html
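To actually turn the logging up while reproducing the error, something like the sketch below; the value is only an example, since the integer packs several per-area 3-bit sub-levels, so see the lkddb link above for the layout:
sysctl dev.scsi.logging_level          # read the current level
sysctl -w dev.scsi.logging_level=4096  # example value only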
using docker to build Varnish VMODs
Docker is containers. Let’s use it to create a build environment we can delete after we’re done. Part one is figuring out what we need to build vmods.
I like using Fedora:
mkdir /tmp/vmods
docker run -v /tmp/vmods:/data -it fedora:24 bash
dnf install -y git
dnf install -y autoconf automake libtool m4 perl-Thread-Queue make
dnf install -y varnish-libs-devel.x86_64 varnish
# dnf install -y dnf-plugins-core /usr/bin/rpmbuild
cd /usr/src
# dnf download --source varnish # no longer have to build Varnish from source to compile VMODs
# rpm -ivh varnish*src.rpm
# dnf install -y groff jemalloc-devel libedit-devel ncurses-devel pcre-devel python-docutils
dnf install -y GeoIP-devel.x86_64
git clone https://github.com/varnish/libvmod-geoip.git
cd libvmod-geoip
bash autogen.sh
./configure VMOD_DIR=/data
make
make install
dnf install -y libmhash-devel
git clone https://github.com/varnish/libvmod-digest
cd libvmod-digest
bash autogen.sh
./configure VMOD_DIR=/data
make
make install
So we split that into a Dockerfile and a script that builds the VMODs.
Dockerfile:
FROM fedora:24
RUN dnf update -y
RUN dnf install -y git autoconf automake libtool m4 perl-Thread-Queue make python-docutils
RUN dnf install -y varnish-libs-devel.x86_64 varnish
ADD build_vmods.sh /
CMD bash /build_vmods.sh
build_vmods.sh:
#!/bin/bash
# compiled VMODs get installed here (the container's /data volume)
OUT_DIR=/data
usage() {
    echo "github URL must be set for each VMOD"
    exit 1
}
build() {
    [ -z "$URL" ] && usage
    # install any extra build dependencies for this VMOD
    [ ! -z "$DEPS" ] && dnf install -y $DEPS
    # clone and build each VMOD in a throwaway directory
    src_dir=$(mktemp -d)
    git clone "$URL" "$src_dir"
    cd "$src_dir" || exit 1
    bash autogen.sh
    ./configure VMOD_DIR="$OUT_DIR"
    make
    make install
}
DEPS="GeoIP-devel.x86_64"
URL="https://github.com/varnish/libvmod-geoip.git"
build
DEPS="libmhash-devel"
URL="https://github.com/varnish/libvmod-digest"
build
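With the Dockerfile and build_vmods.sh sitting in one directory, usage might look like this; the image tag vmod-builder is just an example name:
docker build -t vmod-builder .
mkdir -p /tmp/vmods
docker run --rm -v /tmp/vmods:/data vmod-builder
ls /tmp/vmods    # the compiled .so files land here via VMOD_DIR=/data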
Influx DB Docker
Note on Fedora 24 with DigitalOcean: firewalld is the new firewall management daemon for F24, but it's not installed by default on d.o:
dnf install firewalld
service firewalld start
setenforce 0 # if you want to disable selinux
Also, firewalld and Docker can conflict; Docker needs to be started after firewalld (one way to enforce that is sketched below).
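A sketch of a systemd drop-in that orders docker after firewalld; the drop-in file name is arbitrary:
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/after-firewalld.conf <<'EOF'
[Unit]
After=firewalld.service
EOF
systemctl daemon-reload
systemctl restart docker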
Installing influx on docker:
dnf install docker
service docker start
docker pull docker.io/tutum/influxdb
mkdir /var/influxdb
firewall-cmd --add-port=8083/tcp
docker run -d --name influx --volume=/var/influxdb:/data -p 8083:8083 -p 8086:8086 tutum/influxdb
docker logs influx # check that it's running ok
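A quick sanity check against the HTTP API (the /ping endpoint should answer 204 No Content); note only 8083 was opened in firewalld above, so test 8086 from the host itself or open that port too:
curl -i http://localhost:8086/ping
# firewall-cmd --add-port=8086/tcp    # only if the API must be reachable from outside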
Getting Docker working with ansible:
pip install 'docker-py>=1.7.0' # host side
Docs:
http://docs.ansible.com/ansible/guide_docker.html
https://docs.ansible.com/ansible/docker_container_module.html
https://docs.ansible.com/ansible/docker_image_module.html
Converting EPS Redirects to Nginx Redirects
EPS exports a CSV with the fields: redirect type, incoming URI, outgoing URL or post ID, and number of hits so far. If the incoming URI contains commas, the export doesn't escape them properly, so check for that by grepping for lines with more than 3 commas:
egrep '(.*,){4,}' ~/Downloads/2016-08-24-redirects.csv
The redirects CSV should also be checked for spaces, parens, and ? in the URIs; before running the script, convert spaces to %20 and escape parens as \( and \), as sketched below.
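A quick sed sketch for that pre-cleaning; note it rewrites every field, not just the URIs, so spot-check the result afterwards:
sed -i -e 's/ /%20/g' -e 's/(/\\(/g' -e 's/)/\\)/g' ~/Downloads/2016-08-24-redirects.csv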
eps2nginx.sh:
#!/bin/bash
# path to the CSV exported from EPS
CSV=/tmp/eps-redirects.csv

# fields 2 and 3 are the incoming URI and the outgoing URL (or post ID)
for uri_redir in $(awk -F, '{ print $2 "," $3 }' "$CSV"); do
    uri=$(cut -d, -f1 <<< "$uri_redir")
    redir=$(cut -d, -f2 <<< "$uri_redir")
    case $redir in
        # empty or non-numeric: already a URL, leave it alone
        ''|*[!0-9]*) ;;
        # numeric: it's a post ID, look up its URL with wp-cli
        *) redir=$(wp post list --post__in="$redir" --field=url 2> /dev/null);;
    esac
    echo "rewrite ^/${uri}$ $redir permanent;"
done
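The output is just a list of rewrite directives, so typical usage might be to dump it to a file and include that inside the site's server block; paths here are examples:
bash eps2nginx.sh > /etc/nginx/eps-redirects.conf
# then inside the server { } block: include /etc/nginx/eps-redirects.conf;
nginx -t && systemctl reload nginx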
ZFS Lustre on Centos 7
Install epel and DKMS:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install dkms
Install ZFS: (found https://github.com/zfsonlinux/zfs/wiki/RHEL-%26-CentOS)
yum install http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
yum install zfs
Install libzfs-devel:
yum install libzfs2-devel.x86_64
Run configure in the Lustre source dir and look for:
checking whether to enable zfs... yes
Create zpool:
zpool create mdt /dev/loop5
Format the fs; Lustre creates the dataset in the zpool:
mkfs.lustre --backfstype=zfs --mgs --mdt --index=0 --fsname test --reformat mdt/mdt
You can also point mkfs.lustre directly at a backing file (no separate loop device needed; it creates the zpool for you):
mkfs.lustre --backfstype=zfs --mgs --mdt --index=0 --fsname test --reformat mdt/mdt /tmp/test-mdt
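Once formatted, the MDT is started by mounting the ZFS dataset with type lustre; the mountpoint here is just an example:
mkdir -p /mnt/lustre-mdt
mount -t lustre mdt/mdt /mnt/lustre-mdt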
references:
http://lustre.ornl.gov/ecosystem-2016/documents/tutorials/Stearman-LLNL-ZFS.pdf
http://warpmech.com/?news=tutorial-getting-started-with-lustre-over-zfs
influxdb cloud setup
– create cloud server
– connect with cli client
– create user:
create user kit with password 'good password' with all privileges;
– auth (log in with the influx CLI's auth command)
– create database cooldb
– create user to write data points:
create user dbwriter with password 'good password';
grant write on cooldb to dbwriter
– create user to read for graph making
create user dbreader with password 'good password';
grant read on cooldb to dbreader
– send test datapoint
curl -X POST 'https://server.influxcloud.net:8086/write?db=cooldb&u=dbwriter&p=good%20password' --data-binary 'test value=1000'
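To verify the point landed, a read with the dbreader user against the query endpoint; the password is the same placeholder as above:
curl -G 'https://server.influxcloud.net:8086/query?db=cooldb&u=dbreader&p=good%20password' --data-urlencode 'q=SELECT * FROM test'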