12/16/2017

Git Protocol with Systemd

How to use Git Protocol with Systemd


The documentation at https://git-scm.com/book/en/v2/Git-on-the-Server-Git-Daemon explains how to run the git daemon manually and gives an example of an Upstart script to start the daemon. On my Fedora system, I wanted to use Systemd to start this daemon as a service.

I found the files needed to start the git daemon with Systemd in the /usr/lib/systemd/system directory. They come from the git-daemon package, which can be installed with the following command run as root:

# dnf install -y git-daemon

Here's what they look like:

git.socket

[Unit]
Description=Git Activation Socket

[Socket]
ListenStream=9418
Accept=true

[Install]
WantedBy=sockets.target

git@.service

[Unit]
Description=Git Repositories Server Daemon
Documentation=man:git-daemon(1)

[Service]
User=nobody
ExecStart=-/usr/libexec/git-core/git-daemon --base-path=/var/lib/git --export-all --user-path=public_git --syslog --inetd --verbose
StandardInput=socket

Enable and Start git.socket

If you don't need to customize the service, then you can enable it and start it now. Otherwise, you may want to wait until you have customized the git@.service file.

# systemctl enable git.socket --now

Allow Git Home Directory SELinux Access

If getenforce returns either permissive or enforcing, and you want the daemon to serve repositories from user home directories, set the SELinux boolean that allows it by executing:

# setsebool -P git_system_enable_homedirs=true

Verify Listening git.socket

# ss -tlpn '( sport = :9418 )'

State       Recv-Q Send-Q       Local Address:Port                      Peer Address:Port
LISTEN      0      128                     :::9418                                :::*                   users:(("systemd",pid=1,fd=32))


Open Firewall Port

# firewall-cmd --add-port 9418/tcp --permanent
# firewall-cmd --add-port 9418/tcp 

Customizing git@.service

With the --export-all option used in the git@.service file by default, repositories do not even need to contain the magic file git-daemon-export-ok.

If you have root access, then you can change the git@.service file, but first copy it to the /etc/systemd/system directory:

# cp /usr/lib/systemd/system/git@.service /etc/systemd/system

Then, modify the /etc/systemd/system/git@.service copy, so your changes will not be overwritten on system package updates, and run systemctl daemon-reload afterward so systemd picks up the new configuration.

Here are some things to consider changing:
--base-path to use a different directory instead of /var/lib/git
--user-path to use a different directory instead of public_git for user home directories
--export-all remove to require the daemon-export-ok file before exporting a directory
User=nobody to specify a different user to run the service

The User that is specified will need to have permission to access the directories specified with either the --base-path or --user-path options. 

On my system, the repositories that were under the --user-path were failing until I discovered I needed to allow execute(x) access to my home directory. I didn't like the idea of giving that permission to the nobody user, so I added a gitd user:

useradd -r -d /var/lib/git -s /sbin/nologin gitd

Here's my updated /etc/systemd/system/git@.service:

[Unit]
Description=Git Repositories Server Daemon
Documentation=man:git-daemon(1)

[Service]
User=gitd
ExecStart=-/usr/libexec/git-core/git-daemon --base-path=/var/lib/git --user-path=public_git --syslog --inetd --verbose 
StandardInput=socket 
# removed --export-all and changed User from nobody to gitd

Git Repository Sharing

To share your git repository over your network, you can place its directory under /var/lib/git. By default, only the root user has permission to add files in this directory. If you are using SELinux, be sure to copy files or clone them into this location; do not move them there!

As an ordinary user, you would do the following one time:

$ setfacl -m u:nobody:x $HOME # nobody is the User for the service

If you customized the git@.service file, then be sure to use the User specified in that file, such as "gitd" instead of "nobody".

$ mkdir $HOME/public_git # create the directory for --user-path
$ restorecon -Rv $HOME/public_git # if using SELinux

For each repository to share, an ordinary user would do:

$ cd $HOME/public_git
$ git clone --bare /path/to/project project.git
$ touch project.git/git-daemon-export-ok # if not --export-all
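Putting the sharing steps together, here is a sketch of the whole per-repository flow; the scratch directory and the project name "myproject" are illustrative stand-ins for a real home directory and project:

```shell
#!/bin/sh
# Sketch: publish one repository under the --user-path directory.
# DEMO stands in for a user's $HOME; "myproject" is an illustrative name.
set -e
DEMO=$(mktemp -d)
mkdir -p "$DEMO/public_git"

# A throwaway source repository to share.
git init -q "$DEMO/myproject"

# Publish a bare clone under public_git.
git clone -q --bare "$DEMO/myproject" "$DEMO/public_git/myproject.git"

# Mark it exportable (only needed when --export-all is not used).
touch "$DEMO/public_git/myproject.git/git-daemon-export-ok"
```

With the daemon running, the repository would then be reachable as git://host/~user/myproject.git.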

Accessing Remote Repositories

Base Path Repositories (root user)

If the repository is placed in a directory under the --base-path=/var/lib/git such as /var/lib/git/git-new, then it could be cloned remotely by:

git clone git://(host or ip)/git-new


Using tail -f /var/log/messages shows the log from the server 10.0.0.5 when I connected from the client 10.0.0.46:

Dec 16 17:47:32 future audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@1-10.0.0.5:9418-10.0.0.46:41646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 17:47:32 future git-daemon[25679]: Connection from 10.0.0.46:41646
Dec 16 17:47:32 future git-daemon[25679]: Extended attributes (13 bytes) exist
Dec 16 17:47:32 future git-daemon[25679]: Request upload-pack for '/git-new'

Dec 16 17:47:32 future audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@1-10.0.0.5:9418-10.0.0.46:41646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'


User Path Repositories (normal user)


If the repository is placed in a directory under the --user-path=public_git such as:

/home/kwright/public_git/simple.git

Then, it could be cloned remotely by one of the following:

git clone git://(host or ip)/~kwright/simple.git

git clone git://future/~kwright/simple.git

git clone git://10.0.0.5/~kwright/simple.git

Notice that the public_git portion of the path must be omitted in the request.
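That translation can be sketched with plain parameter expansion; the /home/$user layout is an assumption for illustration, since git-daemon really resolves the user's home directory through the passwd database:

```shell
#!/bin/sh
# Sketch: how git://host/~user/repo.git resolves with --user-path=public_git.
user_path=public_git
request='~kwright/simple.git'

user=${request%%/*}   # ~kwright
user=${user#\~}       # kwright
rest=${request#*/}    # simple.git

# Assumed home layout; git-daemon really consults the passwd database.
echo "/home/$user/$user_path/$rest"
```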

Using tail -f /var/log/messages shows the log from the server 10.0.0.5 when I connected from the client 10.0.0.46:

Dec 16 17:48:31 future audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@2-10.0.0.5:9418-10.0.0.46:41634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 17:48:31 future git-daemon[25583]: Connection from 10.0.0.46:41634
Dec 16 17:48:31 future git-daemon[25583]: Extended attributes (13 bytes) exist
Dec 16 17:48:31 future systemd[1]: Started Git Repositories Server Daemon (10.0.0.46:41634).
Dec 16 17:48:31 future git-daemon[25583]: Request upload-pack for '~kwright/simple.git'
Dec 16 17:48:31 future git-daemon[25583]: userpath , request <~kwright/simple.git>, namlen 8, restlen 8, slash
Dec 16 17:48:54 future audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=git@2-10.0.0.5:9418-10.0.0.46:41634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'



11/29/2017

Direct Rules for Firewalld


Why Firewalld Direct Rules?

  1. You need more power than what's available with simply adding or removing services
  2. You want to make exceptions for certain hosts.
  3. You want to make exceptions for certain networks.
  4. You have experience with iptables, ip6tables, or ebtables commands needed for direct rules.
The documentation for Direct Rules can be found with:

man firewalld.direct

The basic structure of a rule is:

ipv - "ipv4|ipv6|eb" # Whether the rule is iptables, ip6tables, or ebtables based
table - "table" # Table the rule belongs to: filter, mangle, nat, etc.
chain - "chain" # Chain the rule belongs to: INPUT, OUTPUT, FORWARD, etc.
priority - "priority" # Lower priority values take precedence over higher priority values
rule

If you have experience with the iptables command, then you should feel comfortable with basic Direct Rules. Instead of starting with "iptables", the command starts with "firewall-cmd --permanent --direct --add-rule", followed by a rule in the basic structure above.
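That translation can be sketched with a small helper; direct_rule is a hypothetical function for illustration, not part of firewalld:

```shell
#!/bin/sh
# Hypothetical helper: wrap an iptables-style rule spec in the
# firewall-cmd direct-rule boilerplate.
direct_rule() {
    ipv=$1 table=$2 chain=$3 priority=$4
    shift 4
    echo "firewall-cmd --permanent --direct --add-rule $ipv $table $chain $priority $*"
}

direct_rule ipv4 filter INPUT 0 -p tcp --dport 80 -j ACCEPT
```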

One simple firewall scenario

The web server service should only be available to one host and reject all others. Both actions should be logged.

Whitelist one host for one service

In this scenario, the host 10.0.0.107 would be allowed access to the http service, but any other host (the 0.0.0.0/0 network) would be rejected. Beware, any reject or drop rules are evaluated before accept rules.

firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 0 \
-p tcp --dport 80 -s 10.0.0.107 \
-j LOG --log-prefix "DIRECT HTTP ACCEPT"  


firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 1 \
-p tcp --dport 80 -s 10.0.0.107 \
-j ACCEPT


firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 2 \
-p tcp --dport 80 \
-j LOG --log-prefix "DIRECT HTTP REJECT"    

firewall-cmd --permanent --direct --add-rule \
ipv4 \
filter \
INPUT 3 \
-p tcp --dport 80 \
-j REJECT --reject-with icmp-host-unreachable

Since these rules were added with the --permanent option, they are not active in the runtime rules, yet. So, to make the permanent rules active, use the --reload option.

firewall-cmd --reload

I discovered that you have to "get" the rules instead of querying for a "list" of them:

firewall-cmd --direct --get-all-rules

ipv4 filter INPUT 0 -p tcp --dport 80 -s 10.0.0.107 -j LOG --log-prefix 'DIRECT HTTP ACCEPT'
ipv4 filter INPUT 1 -p tcp --dport 80 -s 10.0.0.107 -j ACCEPT
ipv4 filter INPUT 2 -p tcp --dport 80 -j LOG --log-prefix 'DIRECT HTTP REJECT'
ipv4 filter INPUT 3 -p tcp --dport 80 -j REJECT --reject-with icmp-host-unreachable


Rich rules for Firewalld


Why Firewalld Rich Rules?


  1. You need more power than what's available with simply adding or removing services
  2. You want to make exceptions for certain hosts.
  3. You want to make exceptions for certain networks.
  4. You don't have experience with the iptables command needed for direct rules.

Basic Documentation

The man page for firewall-cmd does not cover rich rules. To get the man page about them, use:

   man firewalld.richlanguage

which includes details like...

       A rule is part of a zone. One zone can contain several rules. If some rules
       interact/contradict, the first rule that matches "wins".

       General rule structure

           rule
             [source]
             [destination]
             service|port|protocol|icmp-block|icmp-type|masquerade|forward-port|source-port
             [log]
             [audit]
             [accept|reject|drop|mark]


Summary Firewalld Rich Rules common options:

rule [family="ipv4|ipv6"] 
source [not] address="address[/mask]"|mac="mac-address"|ipset="ipset"
destination [not] address="address[/mask]"
port port="port value" protocol="tcp|udp"
log [prefix="prefix text"] [level="log level"] [limit value="rate/duration"]
accept [limit value="rate/duration"]
reject [type="reject type"] [limit value="rate/duration"]
drop [limit value="rate/duration"]
mark set="mark[/mask]" [limit value="rate/duration"]

Working with reject action

For the reject action above, I found that the type argument is actually mandatory; it must be one of:

icmp-host-prohibited, host-prohib, icmp-net-unreachable, net-unreach, icmp-host-unreachable, host-unreach, icmp-port-unreachable, port-unreach, icmp-proto-unreachable, proto-unreach, icmp-net-prohibited, net-prohib, tcp-reset, tcp-rst, icmp-admin-prohibited, admin-prohib 

Two simple firewall scenarios

Let's take two services running on a host, a web server and a dns server. The web server service should only be available to one host and reject all others. The dns server service should be available to all hosts except one, and drop all others.

Whitelist one host for one service

In this scenario, the host 10.0.0.107 would be allowed access to the http service, but any other host (the 0.0.0.0/0 network) would be rejected. Beware, any reject or drop rules are evaluated before accept rules.

firewall-cmd --add-rich-rule='
rule family=ipv4 
source address="10.0.0.107" 
service name="http" 
log prefix="RICH HTTP ACCEPTED" 
accept' 

firewall-cmd --add-rich-rule='
rule family=ipv4 
source not address="10.0.0.107" 
service name="http" 
log prefix="RICH HTTP REJECTED " 
reject type="icmp-host-prohibited"'

Monitoring the RICH HTTP firewall log

To watch connection attempts being accepted or rejected, try to access the web server from the hosts 10.0.0.107 and 10.0.0.108, respectively, after executing the following on the host running the httpd service (10.0.0.5 here):

tail -f /var/log/messages | grep 'RICH HTTP '

Blacklist one host for one service

In this scenario, the host 10.0.0.107 would be blacklisted from accessing the DNS service, and its attempts to connect will be dropped. All other hosts will be accepted for access:

firewall-cmd --add-rich-rule='
rule family=ipv4 
source address="10.0.0.107" 
service name="dns" 
log prefix="RICH DNS DROPPED " 
drop'

firewall-cmd --add-rich-rule='
rule family=ipv4 
source address="0.0.0.0/0" 
service name="dns" 
log prefix="RICH DNS ACCEPTED " 
accept' 
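Both the whitelist and blacklist commands follow the same pattern; a parameterized sketch, where rich_rule is an illustrative helper and not a firewalld feature:

```shell
#!/bin/sh
# Hypothetical helper: assemble a one-host rich rule for a service.
rich_rule() {
    host=$1 service=$2 action=$3
    printf 'rule family=ipv4 source address="%s" service name="%s" %s\n' \
        "$host" "$service" "$action"
}

rich_rule 10.0.0.107 dns drop     # the blacklist rule
rich_rule 0.0.0.0/0 dns accept    # the catch-all accept rule
```

The resulting string is what gets passed to firewall-cmd --add-rich-rule='...'.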

Monitoring the RICH DNS firewall log

To watch connection attempts to the DNS server being accepted or dropped, try to access the DNS server from the hosts 10.0.0.107 and 10.0.0.108, respectively, after executing the following on the host running the DNS service (10.0.0.5 here):

tail -f /var/log/messages | grep 'RICH DNS '

Rich Rule Persistence

For the rules entered above to be maintained across restarts of firewalld or the system, they need to be either added again with the --permanent option, or you can use the --runtime-to-permanent option to save the runtime rules into the default zone. (You can also create rich rules in other zones using the --zone option.)


firewall-cmd --runtime-to-permanent

After the above command is executed, the rules are saved to a file like /etc/firewalld/zones/public.xml, based upon the default active zone. Although you can use the firewall-cmd --remove-rich-rule option to delete rich rules that you no longer want, you can also edit the zone XML file directly and then use:

firewall-cmd --reload

Other Useful firewall-cmd commands:

firewall-cmd --get-active-zones
firewall-cmd --list-all
firewall-cmd --list-all-zones
firewall-cmd --list-rich-rules
firewall-cmd --help


11/26/2017

Installing Kubernetes on CentOS 7 with kubeadm

Kubernetes on CentOS 7

Prepare CentOS 7 for Kubernetes for Master and Worker

Disable SELinux Enforcement

Update the file /etc/selinux/config:

SELINUX=permissive

To avoid rebooting to have that become effective, execute:

setenforce 0


Disable swap

Swap must be disabled for the kubeadm init process to complete. Edit the /etc/fstab file and comment out any line(s) containing swap. For example:

#/dev/sda5 swap                    swap    defaults        0 0

To avoid rebooting to have that become effective, execute:

swapoff -a
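The fstab edit can also be scripted; here is a sketch run against a sample file, since the sed pattern is an assumption about a conventional fstab layout:

```shell
#!/bin/sh
# Demonstrate on a sample file; on a real system the target is /etc/fstab.
cat > /tmp/fstab.sample << 'EOF'
/dev/sda1 /    ext4 defaults 0 1
/dev/sda5 swap swap defaults 0 0
EOF

# Comment out any active line that mounts a swap volume.
sed -i 's/^[^#].*[[:space:]]swap[[:space:]].*/#&/' /tmp/fstab.sample

grep swap /tmp/fstab.sample    # the swap entry is now commented out
```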


Configure the firewall services



Create the k8s-master.xml and k8s-worker.xml files


cd /etc/firewalld/services

wget \
https://raw.githubusercontent.com/wrightrocket/k8s-firewalld/master/k8s-master.xml

wget \
https://raw.githubusercontent.com/wrightrocket/k8s-firewalld/master/k8s-worker.xml



Reload the firewall 


To make the new services available for use, the firewall must be reloaded. Execute the following to avoid rebooting:

firewall-cmd --reload

Apply the firewall rules


On the master execute:

firewall-cmd --add-service k8s-master 
firewall-cmd --add-service k8s-master --permanent

On worker nodes execute:
firewall-cmd --add-service k8s-worker
firewall-cmd --add-service k8s-worker --permanent


Create Kubernetes Yum Repository

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


Install the packages


yum install -y docker kubelet kubeadm kubectl 

Configure the Kubelet service

In the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, append the following to the value of KUBELET_KUBECONFIG_ARGS:

--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
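Appending the flags can be scripted with sed; a sketch against a sample copy of the drop-in, where the Environment line is abbreviated from what a typical kubeadm install writes:

```shell
#!/bin/sh
# Work on a sample copy; the real file is
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
cat > /tmp/10-kubeadm.conf << 'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf"
EOF

# Insert the cgroup flags inside the quoted KUBELET_KUBECONFIG_ARGS value.
sed -i 's|^\(Environment="KUBELET_KUBECONFIG_ARGS=[^"]*\)"|\1 --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"|' /tmp/10-kubeadm.conf

grep KUBELET_KUBECONFIG_ARGS /tmp/10-kubeadm.conf
```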

Reload systemd
For the updated kubelet configuration to be recognized, systemd must be reloaded.

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Enable the Docker service

systemctl enable docker --now

Create the needed sysctl rules

cat > /etc/sysctl.d/k8s.conf << HERE
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
HERE

Apply the sysctl rules

sysctl --system

Installing Kubernetes on CentOS 7 on Master

Initialize the Master Node

Since the flannel network will be used with the Kubernetes cluster, the --pod-network-cidr option specifies the pod network, which must match the network in the kube-flannel.yml file applied later.

kubeadm init --pod-network-cidr 10.244.0.0/16

Configure kubectl for user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify node is Ready

kubectl get nodes

NAME           STATUS    ROLES     AGE       VERSION
kate.lf.test   Ready     master    2m        v1.8.3


Verify kube-system Pods are Ready

kubectl get pods --all-namespaces

NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
default       website-7cd5577444-xfp6s               1/1       Running   0          8m
kube-system   etcd-kate.lf.test                      1/1       Running   4          2d
kube-system   kube-apiserver-kate.lf.test            1/1       Running   5          2d
kube-system   kube-controller-manager-kate.lf.test   1/1       Running   7          2d
kube-system   kube-dns-545bc4bfd4-9tgcv              3/3       Running   14         2d
kube-system   kube-flannel-ds-gbzhp                  1/1       Running   2          1d
kube-system   kube-proxy-l9fts                       1/1       Running   3          2d
kube-system   kube-scheduler-kate.lf.test            1/1       Running   6          2d

Retrieve the Configuration for Flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Apply the Flannel Network 

kubectl apply -f kube-flannel.yml

Installing Kubernetes on CentOS 7 on a Worker

Retrieve the Token

On the Master node, retrieve the token that was generated during the installation.

kubeadm token list

TOKEN     TTL       EXPIRES   USAGES    DESCRIPTION   EXTRA GROUPS


If no token is shown, then a new token can be generated. The original installation token expires after one day, but the option --ttl 0 can be used with kubeadm token create to create a token that never expires.

kubeadm token create --ttl 0
33d628.3d1c0bf58ab1a68a


Join the Cluster

On the Worker node, join the cluster. Use the token from the previous step and the IP address of your master node.

kubeadm join --token 33d628.3d1c0bf58ab1a68a 10.0.0.108:6443

Install the flannel package

yum -y install flannel

This package is installed after the flannel network has been applied so that the flanneld and docker services will start correctly.

Configure flannel

The etcd prefix value in the file /etc/sysconfig/flanneld is not correct, so flanneld will fail to start because it cannot retrieve the prefix given. The value of FLANNEL_ETCD_PREFIX must be changed to the following:

#FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_ETCD_PREFIX="/coreos.com/network"
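The edit can be made with sed; a sketch against a sample copy, where the real target is /etc/sysconfig/flanneld and the FLANNEL_ETCD_ENDPOINTS line is just illustrative context:

```shell
#!/bin/sh
# Demonstrate on a sample copy of /etc/sysconfig/flanneld.
cat > /tmp/flanneld.sample << 'EOF'
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
EOF

# Point the prefix at the key the cluster actually uses.
sed -i 's|^FLANNEL_ETCD_PREFIX=.*|FLANNEL_ETCD_PREFIX="/coreos.com/network"|' /tmp/flanneld.sample

grep FLANNEL_ETCD_PREFIX /tmp/flanneld.sample
```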

Enable and start flanneld

systemctl enable flanneld --now

This enables and starts flanneld. Since docker has a dependency on flanneld, it will also be restarted, so it may take a while.

Configure kubectl for user

mkdir -p $HOME/.kube


sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Verify Nodes are Ready

It may take several minutes for all the nodes to get to "Ready" status.

kubectl get nodes

NAME           STATUS    ROLES     AGE       VERSION
kate.lf.test   Ready     master    10d       v1.8.3
kave.lf.test   Ready     <none>    4d        v1.8.3



11/13/2017

Docker Basics



This post isn't going to be about everything Docker; it is just about the basics of using Docker on a Linux operating system. On Fedora 25, I used the following commands.


Getting Started with Docker

First, you need to get Docker software installed on your Linux system using your package manager.
In Fedora, I executed the following command to install docker:

dnf install docker

or in SUSE:

zypper install docker

or in Debian/Ubuntu (where the package is named docker.io):

apt-get install docker.io

Next, you need to start and enable the docker service.

systemctl start docker
systemctl enable docker

To verify that your installation is successful execute:

docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Exploring Docker


If all went well, then you can begin to search for images that you might want to use like "minimal", "centos", "suse", "ubuntu" or "mariadb".

docker search minimal

If you execute the docker command without any sub-commands, then it will display a summary of the usage of the docker command:


docker
Usage: docker [OPTIONS] COMMAND [arg...]
       docker [ --help | -v | --version ]

A self-sufficient runtime for containers.

Options:

  --config=~/.docker              Location of client config files
  -D, --debug                     Enable debug mode
  -H, --host=[]                   Daemon socket(s) to connect to
  -h, --help                      Print usage
  -l, --log-level=info            Set the logging level
  --tls                           Use TLS; implied by --tlsverify
  --tlscacert=~/.docker/ca.pem    Trust certs signed only by this CA
  --tlscert=~/.docker/cert.pem    Path to TLS certificate file
  --tlskey=~/.docker/key.pem      Path to TLS key file
  --tlsverify                     Use TLS and verify the remote
  -v, --version                   Print version information and quit

Commands:
    attach    Attach to a running container
    build     Build an image from a Dockerfile
    commit    Create a new image from a container's changes
    cp        Copy files/folders between a container and the local filesystem
    create    Create a new container
    diff      Inspect changes on a container's filesystem
    events    Get real time events from the server
    exec      Run a command in a running container
    export    Export a container's filesystem as a tar archive
    history   Show the history of an image
    images    List images
    import    Import the contents from a tarball to create a filesystem image
    info      Display system-wide information
    inspect   Return low-level information on a container, image or task
    kill      Kill one or more running containers
    load      Load an image from a tar archive or STDIN
    login     Log in to a Docker registry.
    logout    Log out from a Docker registry.
    logs      Fetch the logs of a container
    network   Manage Docker networks
    node      Manage Docker Swarm nodes
    pause     Pause all processes within one or more containers
    port      List port mappings or a specific mapping for the container
    ps        List containers
    pull      Pull an image or a repository from a registry
    push      Push an image or a repository to a registry
    rename    Rename a container
    restart   Restart a container
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    save      Save one or more images to a tar archive (streamed to STDOUT by default)
    search    Search the Docker Hub for images
    service   Manage Docker services
    start     Start one or more stopped containers
    stats     Display a live stream of container(s) resource usage statistics
    stop      Stop one or more running containers
    swarm     Manage Docker Swarm
    tag       Tag an image into a repository
    top       Display the running processes of a container
    unpause   Unpause all processes within one or more containers
    update    Update configuration of one or more containers
    version   Show the Docker version information
    volume    Manage Docker volumes
    wait      Block until a container stops, then print its exit code

Run 'docker COMMAND --help' for more information on a command.

For the next recommended test, try out the ubuntu image.  To start the bash shell inside the ubuntu container, execute:

docker run -it ubuntu bash

root@bb21045eda79:/#

To see this container running, from another terminal execute:

docker ps

CONTAINER ID    IMAGE            COMMAND   CREATED            STATUS              PORTS        NAMES
bb21045eda79        ubuntu              "bash"              8 seconds ago       Up 5 seconds                            hopeful_fermi

The name shown "hopeful_fermi" was automatically generated. Replace this name with the name automatically generated by your system in the command below.

To stop this container, from another terminal execute:

docker stop hopeful_fermi
hopeful_fermi

To start a container based upon the ubuntu image with a specific name like "zesty" execute:

docker run -it --name "zesty" ubuntu bash
root@2658f64e637a:/#

To see this container running, from another terminal execute:

docker ps

CONTAINER ID    IMAGE            COMMAND   CREATED            STATUS              PORTS        NAMES

2658f64e637a        ubuntu              "bash"              37 seconds ago      Up 35 seconds                          zesty


Saving Docker Container Changes

What if you've customized your docker container, and you want to launch the customized changes the next time you use it? One technique is to use the docker commit sub-command to save the changes to a new image.

Start running a docker container for the mariadb database, mapping ports with -p host:container and setting -e ENVIRONMENT_VARIABLE=value:

docker run  -p 3306:3306 -e MYSQL_ROOT_PASSWORD='secret' mariadb

The name that you see for your container may not be what you want it to be when you view running containers. You can use the rename sub-command, or pass a --name option when first running the container.

docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
0675838efe90             mariadb              "docker-entrypoint.sh"   46 minutes ago      Up 46 minutes       0.0.0.0:3306->3306/tcp   wright-mariadb


You may want to rename the container based upon the image like:

docker rename 0675838efe90 colorsdb

Then, you can save the changes to your container with a docker commit command:

docker commit colorsdb

For more details, you can look at the help available for the commit command.

docker commit --help

Usage:  docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Create a new image from a container's changes
Options:
  -a, --author string    Author (e.g., "John Hannibal Smith <hannibal@a-team.com>")
  -c, --change value     Apply Dockerfile instruction to the created image (default [])
      --help             Print usage
  -m, --message string   Commit message
  -p, --pause            Pause container during commit (default true)


To stop running the container, you can now use:

docker stop colorsdb

To start the container with the changes that have been committed to it after it has been stopped, you can use:

docker start colorsdb


Using Docker Containers


So now what? You can download an image, create a container from it, and execute commands inside of it. You might be wondering, like me: how do you actually use this container?

Many containers are used through the ports they expose to the host system or to other containers.



To connect to the container, which has its port 3306 exposed on localhost port 3306, you could execute:


mysql -h localhost -P 3306 --protocol=tcp -p







4/24/2017

Solving CA Certificate Errors



Do you get a CA certificate error of "unable to get local issuer certificate" or
"certificate verify failed"?

On Debian/Ubuntu, copy the certificate to /usr/local/share/ca-certificates and then execute:

update-ca-certificates

On RHEL/Fedora/CentOS, copy the certificate to /etc/pki/ca-trust/source/anchors and then execute:

update-ca-trust

Read the man page on update-ca-certificates for more details.

10/07/2016

Android Studio 2.2 Ready for Prime Time!


I've tried Android Studio several times over the last year or so, but each time it was not ready for prime time. By that I mean that, by default, it was so broken that it was unable to build a new, unmodified project. To be honest, Eclipse for Android Developers has been similarly broken, and was broken at the time I wrote this.

Recently, Google released Android Studio 2.2.0.12, and it is ready to build apps for API 24 right out of the box. I'm excited about being able to start developing without having to fight to get my build environment working!

8/23/2016

JOOSAN NVR Admin Password Reset


I purchased a 4 Camera Wireless Surveillance NVR kit from CCTV Systems on Amazon in 2016. Somehow the password for the admin account stopped working, and I don't remember changing it. It had better not happen again, or I might return the unit! Without the admin password, I was unable to change any settings on the device.

When I got stuck without being able to log in as admin, thankfully the vendor, CCTV Systems, was able to tell me how to reset the admin password. Apparently, if they know the date on your unit, they can generate a password for the admin account that will allow a login, which is a bit scary. Maybe my unit won't be running with the actual date...

When I told them that the date had been reset to 1970/01/01 because I had unplugged the battery from the unit in trying to reset the system, they gave me a different solution. Here is what I was told via email:

This date is not normal.
So now you can on the screen of login,and input the wrong password,when it pop-up a message with invalid password,you can right-click,left-click  with the mouse,cycle times.Then it will let you reset the user and password.

This led to me developing the following procedure to reset the admin password for the JOOSAN NVR:

  1. Unplug the unit.
  2. Open the top cover by unscrewing the necessary screws.
  3. Remove the battery from the unit by pulling it up.
  4. Replace the battery and cover after a minute or two.
  5. Power the unit back on and wait until system is initialized.
  6. Right-click to try System Setup.
  7. Attempt to login as admin with the wrong password.
  8. Alternately, right-click and left-click several times. 
  9. A dialog should pop up and let you reset the admin password back to nothing or being blank.

Here are some steps you will want to perform after resetting the system.


Reset the time:

  1. Right-click to go to System setup.
  2. Click General setup on the top, Time setup on the left, and set up the time in the middle.
  3. Be sure to click Apply before you click OK.

Reset the admin user password:

  1. Right-click to go to System setup.
  2. Click System Admin on the top, User management on the left.
  3. Select the row of the admin user.
  4. Click the Set password button.
  5. Type the old admin password or leave it blank after resetting the system.
  6. Type and repeat the new password.
  7. Click the Ok button.

Since resetting the admin password, I have also created an extra super user account (System Admin, User management, Add user) to avoid being locked out in the future. In addition, I created an account for family members that can be used just for viewing the cameras.

I hope this helps you! Best wishes!

8/21/2016

LDAP: On-Line Configuration (OLC) and Static slapd.conf

LDAP: On-Line Configuration (OLC) and Static slapd.conf


Installing OpenLDAP 

To install both the client and server packages on RHEL/CentOS 7:

yum -y install openldap-servers openldap-clients

Enable and start the service:

systemctl enable slapd
systemctl start slapd


OLC

Until OpenLDAP 2.3, an OpenLDAP server was configured by editing /etc/openldap/slapd.conf. This meant the server had to be restarted for configuration changes to take effect.

With OpenLDAP 2.3+, On-Line Configuration (OLC) of the server became possible through the addition of a Directory Information Tree (DIT) called cn=config.
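Entries in the cn=config DIT are ordinary LDAP entries. For example, the global entry and a database entry look roughly like the following (attribute values here are illustrative defaults, not from a specific server):

```ldif
# Global server settings live in the root entry of the config DIT
dn: cn=config
objectClass: olcGlobal
cn: config

# Each backend database gets its own numbered child entry
dn: olcDatabase={2}hdb,cn=config
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=my-domain,dc=com
```

Changes are made by running ldapmodify against these entries rather than editing a file and restarting.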

To view the OLC, you can execute as root:

ldapsearch -H ldapi:/// -Y EXTERNAL -b cn=config


olcSuffix, olcRootDN and olcRootPW

The first step in configuring your domain is to set the suffix for your domain's DIT, along with the administrative user's Distinguished Name (DN) and password. The olcRootDN must end with the same suffix specified by olcSuffix.

Create an LDIF file with the following contents, updated for your own domain and with the olcRootPW value generated by executing slappasswd. This information can then be modified on the LDAP server with the following command:

ldapmodify -H ldapi:/// -Y EXTERNAL -f olc-root.ldif

olc-root.ldif:


dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=samba,dc=org
-
replace: olcRootDN
olcRootDN: cn=admin,dc=samba,dc=org
-
replace: olcRootPW
olcRootPW: {SSHA}GTeZbB7rpAMtPHVNxBZFN6ZFhwe+kINv
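The {SSHA} value above is base64( SHA-1(password + salt) + salt ), with a 4-byte salt appended. On a real server, just run slappasswd to produce it; the sketch below only illustrates the format, assuming the openssl and xxd tools are available:

```shell
# Generate an {SSHA} value the way slappasswd does: base64( SHA1(pass+salt) + salt ).
ssha_gen() {
    pass=$1
    salt_hex=$(openssl rand -hex 4)                 # slappasswd uses a 4-byte salt
    {
        { printf '%s' "$pass"; printf '%s' "$salt_hex" | xxd -r -p; } \
            | openssl dgst -sha1 -binary            # 20-byte digest of pass+salt
        printf '%s' "$salt_hex" | xxd -r -p         # append the salt itself
    } | base64
}

# Check a plaintext password against an {SSHA} hash.
ssha_check() {
    pass=$1
    b64=${2#"{SSHA}"}
    hex=$(printf '%s' "$b64" | base64 -d | xxd -p | tr -d '\n')
    digest_hex=$(printf '%s' "$hex" | cut -c1-40)   # first 20 bytes: the digest
    salt_hex=$(printf '%s' "$hex" | cut -c41-)      # remainder: the salt
    calc_hex=$( { printf '%s' "$pass"; printf '%s' "$salt_hex" | xxd -r -p; } \
                | openssl dgst -sha1 -binary | xxd -p | tr -d '\n' )
    [ "$calc_hex" = "$digest_hex" ]
}

hash="{SSHA}$(ssha_gen secret)"
ssha_check secret "$hash" && echo "password matches"
```

Because the salt is random, two hashes of the same password will differ, which is why ssha_check recomputes the digest with the stored salt instead of comparing hashes directly.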

Configuring Logging with OLC

ldapmodify -H ldapi:/// -Y EXTERNAL -f olc-logging.ldif

olc-logging.ldif:

dn: cn=config
changetype: modify
add: olcLogFile
olcLogFile: /var/log/slapd.log
-
add: olcLogLevel
olcLogLevel: filter config acl
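Note that slapd sends olcLogLevel messages to syslog (facility local4 by default), so getting them into /var/log/slapd.log also requires a matching syslog rule; a sketch for rsyslog (the drop-in filename is an assumption):

```
# /etc/rsyslog.d/slapd.conf -- route slapd's syslog facility to a file
local4.*    /var/log/slapd.log
```

Restart rsyslog after adding the rule.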



Configuring Organizational Units (OUs)

If you want to configure the LDAP directory to contain information for authenticating users of your domain, then you will need to create the following dcObject, organization, and organizational unit entries. The simpleSecurityObject and organizationalRole entry can be used as an administrator account for the suffix. Entries under this suffix will need to be modified using the DN of this LDAP administrator entry.

Create an LDIF file with the following contents updated for your own domain, and then update the LDAP server by executing:

ldapadd -D cn=admin,dc=samba,dc=org -w secret -f olc-domain.ldif

olc-domain.ldif:

dn: dc=samba,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
o: samba.org
dc: samba

dn: cn=admin,dc=samba,dc=org
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator
userPassword: secret

dn: ou=users,dc=samba,dc=org
objectClass: top
objectClass: organizationalUnit
ou: users

dn: ou=groups,dc=samba,dc=org
objectClass: top
objectClass: organizationalUnit
ou: groups

dn: ou=idmap,dc=samba,dc=org
objectClass: top
objectClass: organizationalUnit
ou: idmap

dn: ou=computers,dc=samba,dc=org
objectClass: top
objectClass: organizationalUnit
ou: computers
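With these OUs in place, account entries can be added beneath them. A hypothetical user entry might look like this (all values illustrative; the inetOrgPerson class requires the cosine and inetorgperson schemas covered in the next section):

```ldif
dn: uid=jdoe,ou=users,dc=samba,dc=org
objectClass: inetOrgPerson
cn: John Doe
sn: Doe
uid: jdoe
```

It would be added with the same style of command used above, for example: ldapadd -D cn=admin,dc=samba,dc=org -w secret -f user.ldif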


Configuring OLC Schema

To discover which schema have been added to your server, you can execute the following query:

ldapsearch -H ldapi:/// -Y EXTERNAL -b cn=schema,cn=config cn

Most installations will only have the "core" schema installed. Other schemas that are often needed for authentication can be added by executing the following commands, in order. The order matters because later schemas reference attributes defined in earlier ones, and an unresolvable attribute reference will prevent a schema from being added.

ldapadd -H ldapi:/// -Y EXTERNAL -f /etc/openldap/schema/cosine.ldif

ldapadd -H ldapi:/// -Y EXTERNAL -f /etc/openldap/schema/corba.ldif

ldapadd -H ldapi:/// -Y EXTERNAL -f \
/etc/openldap/schema/inetorgperson.ldif

ldapadd -H ldapi:/// -Y EXTERNAL -f \
/usr/share/doc/samba-4.2.3/LDAP/samba.ldif





About Me - WrightRocket


I've worked with computers for over 30 years, programming, administering, using and building them from scratch.

I'm an instructor for technical computer courses, an editor and developer of training manuals, and an Android developer.