Thursday, November 23, 2017

Docker networking issue - IPv4 forwarding is disabled error and fix



During a docker build, I faced the following error:

Docker Networking Disabled: WARNING: IPv4 forwarding is disabled. Networking will not work

So, all yum install commands failed.

Fix:

Added the following to /etc/sysctl.conf:

net.ipv4.ip_forward=1


Then restarted the network service.
Now, docker build should be able to reach the network.
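
The whole fix as commands (a sketch; sysctl -p reloads /etc/sysctl.conf, and "network" is the CentOS/RHEL service name):

# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl -p
# systemctl restart network

# verify - should print: net.ipv4.ip_forward = 1
# sysctl net.ipv4.ip_forward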


Note to self:
Next time, keep an eye on all the warnings shown during the docker build itself :)




Tuesday, November 21, 2017

list files in installed package/rpm file


To list the files of an already INSTALLED package:

rpm -ql package-name


To list the files in a downloaded RPM package file:

rpm -qlp package.rpm

--

To list the rpm which installed a specific file (executable / configuration file):

rpm -qf   /path/to/the/file
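
For example (the exact package version will vary per system):

# rpm -qf /usr/sbin/ifconfig
net-tools-2.0-0.22.20131004git.el7.x86_64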

--

Friday, November 17, 2017

To have a unique machine id in multiple VMs - CentOS





When you clone a VM, sometimes the machine id is not regenerated, and you end up with the same machine id on multiple machines.

So, if that is the case, just remove /etc/machine-id and then reboot.

Then, run systemd-machine-id-setup.

Now, you will have a unique id in /etc/machine-id. Now, live happily ever after :) :)
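
The removal step as commands (run as root):

# rm -f /etc/machine-id
# reboot

After the reboot: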

# systemd-machine-id-setup
Initializing machine ID from KVM UUID.

# cat /etc/machine-id 
4cd635a844fb4219bb03f8726a64abca


Original error faced while using ansible:

failed: [centosmaster] (item=tmaster) => {
    "assertion": "ansible_hostname == hostvars[item]['ansible_hostname'] or ansible_machine_id != hostvars[item]['ansible_machine_id']", 
    "changed": false, 
    "evaluated_to": false, 
    "failed": true, 
    "item": "tmaster"
}

Monday, November 13, 2017

vagrant - unable to reach host - fix



I was trying to reach a URL from inside a Vagrant-based virtual machine, but was unable to reach it.

It errored out like:

Name or service not known

I tried multiple options - in the end, restarting the libvirtd service on the host did the trick.

So, sudo systemctl restart libvirtd 


Underneath libvirtd, dnsmasq is running on the host (serving DHCP/DNS for the libvirt network), and I think it was causing the issue.
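
A quick check to confirm dnsmasq is the one serving the libvirt network on the host (the exact process arguments may differ per setup):

$ ps aux | grep dnsmasq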


So, next time, just restart libvirtd and check :)

Monday, November 6, 2017

checkout a github Pull Request locally


So, you have cloned a repo and are working on it.

There is a Pull Request (PR) in the *upstream* repo which you wish to check out locally.


First set the upstream:

$ git remote add upstream https://github.com/organization/repo


Then, do the below:

$ git fetch upstream pull/195/head:my_testing

remote: Counting objects: 32, done.
remote: Total 32 (delta 8), reused 8 (delta 8), pack-reused 24
Unpacking objects: 100% (32/32), done.
From https://github.com/gluster/gluster-kubernetes
 * [new ref]         refs/pull/195/head -> my_testing

Here,
upstream -> the remote pointing to the original repo you forked from
195 -> PR number
my_testing -> new local branch name
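
So, the general pattern is (with placeholders for the PR number and local branch name):

$ git fetch upstream pull/<PR_NUMBER>/head:<local_branch_name>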


$ git checkout my_testing
Switched to branch 'my_testing'


PS: If you have not forked but cloned the original repo itself, use "origin" instead of "upstream".
Use git config -l to see the origin and upstream remotes.



Friday, October 27, 2017

Template creation from a Virtual Machine

Template creation from a Virtual Machine:

Before creating a template out of a virtual machine, we need to delete the unique, per-VM stuff - for example, the hostname, MAC address, and UUID.

You can use the following commands to make it generic enough and then save the template.


hostnamectl set-hostname localhost.localdomain

Remove the entries starting with "UUID" and "HWADDR" in /etc/sysconfig/network-scripts/ifcfg-eth0 (on all interfaces, in case more than one NIC is available).
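
One way to strip those entries in place (a sketch, assuming the default ifcfg path):

sed -i '/^UUID/d; /^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0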

Delete all but one file




Delete all but one file:

ls | grep -v testfile.tar | xargs rm 

So, all files except testfile.tar will be deleted.


use "xargs rm -rf"   if you wish to delete directories as well.


vagrant + ansible

vagrant + ansible

I have followed this neat tutorial https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/


After trying out the above, I experimented with provisioning by modifying the ansible file.

So, add the below lines to lamp.yml and then run:

# vagrant provision

lamp.yml:

- name: Creates directory
  file: path=/tmp/newdirectory state=directory

The above makes the change in the running box, and it can be observed using vagrant ssh.
Note: you need to run vagrant ssh from the same directory to verify it (lampbox in this example).

Friday, September 29, 2017

Direct link to google docs




Use direct links to Google Docs:

newdoc - http://docs.google.com/?action=newdoc
newsheet - http://spreadsheets.google.com/ccc?new
newslide - https://docs.google.com/presentation/u/0/
newimage - https://docs.google.com/drawings/create?hl=en

Bookmark the above in your favourite browser to create new Google docs directly.
For example, newsheet opens a new spreadsheet.

Tuesday, September 12, 2017

Setting up thin volume steps


Thin LVM creation steps:


# Create a virtual disk (loop device) backed by the testlvm.img file
dd if=/dev/zero of=/root/testlvm.img bs=1024 count=600000

# Link the file created to a loop device
losetup /dev/loop1 /root/testlvm.img

# Alternatively, this finds the first free loop device and attaches it to the file
losetup -f /root/testlvm.img


   To verify:
   losetup -a


# Mark the disk as an LVM physical volume
pvcreate /dev/loop1

Note: use #pvdisplay, #pvscan for physical volume information.

# Create volume group
vgcreate test_VG /dev/loop1

Note: use vgs, vgdisplay for info.

#  Create a thin LV pool
lvcreate -L 500M -T test_VG/mythinpool

Note: use lvdisplay test_VG or lvs test_VG to verify.

#  Create thin logical volume (LV) out of  LV pool created
lvcreate -V1000M  -T test_VG/mythinpool -n thinvolume

Note: use lvdisplay test_VG or lvs test_VG to verify.
Note: -V is the virtual size; with thin provisioning it can exceed the physical pool size set above (1000M virtual on a 500M pool here).

#  Format the LV
mkfs.ext4 /dev/test_VG/thinvolume

# Mount the LV
mount /dev/test_VG/thinvolume /mnt
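
To tear the setup down afterwards (a sketch; reverse order of creation):

umount /mnt
lvremove -y test_VG/thinvolume test_VG/mythinpool
vgremove test_VG
pvremove /dev/loop1
losetup -d /dev/loop1
rm -f /root/testlvm.img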

Refer:
man 7 lvmthin
man 8 lvm
man 8 losetup


Friday, August 18, 2017

s2i scripts override in OpenShift

This post is about how to override the s2i scripts in the application source repo itself.

This way we can avoid rebuilding the builder image (which is where the build scripts usually live).

Add .s2i/bin/<all your build scripts> in your github repo.

Example: https://github.com/SaravanaStorageNetwork/tests2i/
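
A typical layout for the override inside the application repo (assemble and run are the usual scripts; save-artifacts is optional):

.s2i/bin/assemble        # custom build steps
.s2i/bin/run             # how to start the application
.s2i/bin/save-artifacts  # optional, for incremental builds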

Then you can build like this:
s2i build https://github.com/SaravanaStorageNetwork/tests2i.git nginx-centos7 nginx-centos7-s2i-testing     

Where,
- nginx-centos7 is the builder image already built.
- nginx-centos7-s2i-testing is the output image.
- https://github.com/SaravanaStorageNetwork/tests2i.git - the repo having all source files; here it includes the .s2i/bin build scripts too.


Note:
1.
If the source is in a local directory, you need to specify the build scripts using the -s option as below:
s2i  build --pull-policy=never --loglevel=2 file:///root/template/s2i-using-s3/examples/nginx-centos7/test/test-app nginx-centos7 nginx-centos7-testapp     -s  file:///root/template/s2i-using-s3/examples/nginx-centos7/test/.s2i/bin/

2.
Ensure LABEL io.openshift.s2i.scripts-url= is removed from builder image Dockerfile.

Friday, August 4, 2017

How long the process is running ?


How long the process is running ?

ps -o etime= -p  <pid of process>



Example:
# ps -o etime= -p 15245
      00:26

Where 15245 is the PID of the process we want to check.
It has been running for the past 26 seconds; the output format is [[DD-]hh:]mm:ss.
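
If you want the elapsed time in plain seconds instead (supported by newer procps versions):

# ps -o etimes= -p 15245
26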

Wednesday, August 2, 2017

thunderbird tip - avoid double spacing


While using the Thunderbird mail client, it uses double spacing instead of single spacing (not sure how I ended up like this).

The issue is "Paragraph" being selected as the default; if you select "Body Text", it will be single spaced.

To make this the default, do the following:

  1. Go to the main Thunderbird window and select the following from the menu bar: Tools > Options > Composition > General tab.
  2. Uncheck the box that says "When using paragraph format, pressing the enter key creates a new paragraph".

Monday, July 31, 2017

Get notification at specific time






Run the below command in your terminal:

echo ' notify-send "Meeting with Munusamy in 5 minutes" ' | at 1:25PM



echo 'notify-send "Get your tea!"' | at now + 3 minutes

echo 'notify-send "Meeting in 1 hour with the big boss!"' | at 1pm tomorrow 



Commands used:
notify-send [OPTION...] <SUMMARY> [BODY] - create a notification

at and batch read commands from standard input or a specified file, to be executed at a later time, using /bin/sh.


source: https://superuser.com/questions/38654/pop-up-notification-when-time-reaches-400pm





Thursday, July 27, 2017

full screen mode in gvim editor



You need to set the following in your vimrc (mine is /etc/vimrc):



map <silent> <F11>
\    :call system("wmctrl -ir " . v:windowid . " -b toggle,fullscreen")<CR>



Also, you need to have this package installed:
sudo yum install wmctrl.x86_64 -y 


Friday, July 14, 2017

openshift - oc cluster up in specific address



When you do 'oc cluster up', you can access the OpenShift console using 127.0.0.1.

But if you wish to specify the IP address used to access your OpenShift console, make use of --public-hostname=<IP address>.

This way you can access the cluster even from outside (but within the LAN, of course).


# oc cluster up --public-hostname=192.168.122.152

===============================================

Also, to retain the configuration from a previous run:

oc cluster up --host-data-dir=/mydata --use-existing-config

Where /mydata is a directory you created - ensure it has adequate permissions.


============================================

Also, you can ask for a specific version like below:

oc cluster up  --public-hostname=10.70.43.54   --version=v3.9 --service-catalog 

=============

Another example (with all above combined):

oc cluster up \
--image=registry.access.redhat.com/openshift3/ose \
--version=v3.9 \
 --service-catalog=true \
 --routing-suffix=10.70.43.54.nip.io \
 --public-hostname=10.70.43.54.nip.io \
 --loglevel=2 \
 --host-data-dir=/mydata \
 --use-existing-config


Where 10.70.43.54 is your node's IP address.

===============================================

Friday, July 7, 2017

Store and Run docker images locally & inspect contents of docker image


==========================================

Store and Run docker images locally:


How to save image locally and use it:

// You have built an image named s3-store-build

// Now save locally
$ docker save s3-store-build > s3-store-build.tar

$ ls -ltrsh
261M -rw-rw-r--  s3-store-build.tar

// Load the image - ready to run:
$ docker load --input s3-store-build.tar
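
To verify the image is back after the load:

$ docker images | grep s3-store-build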

==========================================

Inspecting contents of docker image:

# get your image name from this command

docker images


# here 172.31.17.15:5000/openshift/test-apb is the image name; save it to a tar file

docker save 172.31.17.15:5000/openshift/test-apb > s3.tar


mkdir test

cd test

# extract entire image using tar
tar xf ../s3.tar


# now extract individual tar files present
for filename in `find . -name  "*.tar" `; do tar xf  $filename; done


# Now, check whether the file you are looking for is present in the image
find . -name <some_test_file_whether_present.txt>

=================================================

Push image to docker hub:

How to push image to docker hub:

# docker login


// build a image using Dockerfile in PWD
# docker build -t   store-build .


# docker images | grep store-build

// tag the image using its ID from the output above
# docker tag db548e132e30 saran/store-build

# docker push saran/store-build

Thursday, July 6, 2017

Pulling docker images using url




Pulling docker images:

Generally, you pull a public image like:

docker pull gluster/gluster-centos



Sometimes you want to pull the Docker image using its fully qualified name - the registry host plus the repository path (note that docker pull expects this form, not an https:// web URL):

docker pull docker.io/gluster/gluster-centos

The registry part can be any registry where the Docker image is present.

Thursday, June 29, 2017

Combine multiple commits into single one - squash your github commits




How to combine multiple commits into a single one?

While working on a pull request, I ended up with 7 commits in the same pull request.

I wish to combine all those 7 commits into a single one. (So it is easier to merge, and unnecessary history is removed.)


# carrying out squash:

You want to squash all 7 commits into one:

# git rebase -i HEAD~7

 

It will launch an editor.

# Leave the first commit as is.

# For commits 2 - 7, change pick to squash, as sketched below.
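
A sketch of how the edited todo list might look (the hashes and messages here are placeholders):

pick   1111111 add feature
squash 2222222 fix typo
squash 3333333 address review comments
squash 4444444 more review fixes
squash 5555555 fix tests
squash 6666666 rework docs
squash 7777777 final touch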

Now, it will squash the changes and launch an editor for the commit message (which initially contains all 7 commit messages).
You can edit the combined commit message as you wish.

Note: any line starting with # is a comment. Delete the unneeded commit messages, else they will only add confusion.





# pushing changes:


Now, I had already pushed all those 7 commits to the github pull request.

So, I need to do a force push:

# git push origin  <your branch name>  --force

that's it. :)

You can verify your github pull request. 
 

markdown preview in google-chrome


You are updating a markdown file and wish to preview how your changes look.

You can install the below addon:

https://chrome.google.com/webstore/detail/markdown-preview-plus/febilkbfcbhebfnokafefeacimjdckgl?utm_source=chrome-app-launcher-info-dialog


You can edit the markdown file using your favourite editor and preview your changes live in your chrome browser.

Wednesday, June 21, 2017

Execute ansible-playbook faster




Execute ansible faster:

Set the below value in /etc/ansible/ansible.cfg (or) wherever your configuration file is:

------------------------------
[ssh_connection]
pipelining = True
------------------------------




How does this help?

===========================

pipelining

Enabling pipelining reduces the number of SSH operations required to execute a module on the remote server, by executing many ansible modules without actual file transfer. This can result in a very significant performance improvement when enabled, however when using “sudo:” operations you must first disable ‘requiretty’ in /etc/sudoers on all managed hosts.
By default, this option is disabled to preserve compatibility with sudoers configurations that have requiretty (the default on many distros), but is highly recommended if you can enable it, eliminating the need for Accelerated Mode:

pipelining = False
===========================
Source:  http://docs.ansible.com/ansible/intro_configuration.html#pipelining
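
If your managed hosts have requiretty set, it can be disabled like this (a sketch; always edit sudoers via visudo):

# in /etc/sudoers on the managed hosts
Defaults    !requiretty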

Bring up a network interface






========================

If an interface (e.g. eth0) is not brought UP automatically on system boot, you can temporarily bring it up like:

# dhclient eth0

========================

To make it permanent, edit /etc/sysconfig/network-scripts/ifcfg-eth0
and set ONBOOT to yes:
---------------------------
...
ONBOOT=yes
---------------------------

========================
Here, eth0 is the interface configured.


Tuesday, June 20, 2017

Progress of copy operation


See progress while carrying out a copy operation:

rsync --info=progress2 <source> <destination>

While using the cp command, there is currently no direct way to check the progress; you can make use of rsync with the --info flag as above to see the progress of the copy operation.


Note:
rsync dir1/  dir2

This will copy the contents of dir1/ into dir2.


rsync dir1  dir2

This will copy dir1 itself (and its contents) into dir2/.

Tuesday, June 13, 2017

port opened in your machine



Check whether a specific port is open on your machine:

You can make use of "netstat -tuplen" (-t TCP, -u UDP, -p show the owning program, -l listening sockets, -e extended info, -n numeric addresses):

# netstat -tuplen



For example, the httpd (apache) server listens on port 80:

# netstat -tuplen | grep httpd
tcp6       0      0 :::80                   :::*                    LISTEN      0          269550     15610/httpd        
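
On newer distributions where net-tools is absent, ss (from iproute2) accepts the same flags:

# ss -tuplen | grep 80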





Tuesday, June 6, 2017

sudo su to execute bash( and avoid sh)




Problem:

"sudo su" does not read /etc/bashrc and executes "sh" instead of bash



Solution:

You need to add the following lines in your /root/.bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi


that's it. 

Monday, June 5, 2017

save username/password while using github



--
You need to set this to store credentials on disk:


git config credential.helper store


The first time, it will ask for credentials and store them on disk; afterwards the stored ones will be used.

This is applicable per github repo.

---

You can do the same globally by:

git config --global credential.helper store

cat ~/.gitconfig should reflect this. Note that the credentials themselves are saved in plain text in ~/.git-credentials.

---

Setting up fresh github repo


Setting up fresh github repo:


1. First, visit github and create a user.

2. Visit a URL like https://github.com/user_name_here

3. Go to the "repositories" tab - click on "new".

4. Give it a name, e.g. "my_new_repository".


Avoid adding a README using the UI. We will do it later from the terminal.

5. Now you should be able to access this link:

https://github.com/user_name_here/my_new_repository.git



Now, open terminal and follow the *sample* workflow to freshly initialize your github repo:


echo "# simple_testing" >>  README.md
 

git init
 

git add README.md
 

git commit -m "first commit"

git remote add origin git@github.com:user_name_here/simple_testing.git


Another format for add: (this gives authentication failure. Avoid using it)

  git remote add origin https://github.com/user_name_here/simple_testing.git


git push -u origin master
 

Here, simple_testing is the "repo" name created.

Friday, June 2, 2017

github - keep your fork in sync



Keep your fork in sync with the original master:



# Now, you are in your local cloned copy of original GitHub repo:

git checkout master


You wish to update master to be in sync with original GitHub repo.



# First, add the original github location as a "remote" (1) - here *upstream* is the name provided by us:

git remote add upstream  https://github.com/original-repo-from-where-you-cloned.git



# fetch all branches - note we are using the name provided above :)  (2)

git fetch upstream


# apply all changes from the original github location to your branch and then replay your changes on top

(3)

git rebase upstream/master

Now your fork is in sync with the original repo :) - but only locally. You need to push the changes to update your fork on GitHub.


// Check status
git status
On branch master
Your branch is ahead of 'origin/master' by 15 commits.
  (use "git push" to publish your local commits)
nothing to commit, working directory clean

// Push all changes you pulled from origin/master
git push

// Now check status again
git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean


You may check in GitHub UI, to check whether all changes are in.


=========================================================
Notes:

(1)  git-remote  add

Adds a remote named <name> for the repository at <url>. The command git fetch <name> can then be used to create and update remote-tracking branches <name>/<branch>.

(2) git-fetch - Download objects and refs from another repository

(3)  git-rebase - Reapply commits on top of another base tip

===============================================================
How to get your local test branch in sync with master?

So, let's say your master branch is now in sync with the *upstream* remote.

You want to bring all the changes now on master into your local branch (say testcodebranch).

So, do this:

# git checkout testcodebranch
# git rebase master

This way testcodebranch will be in sync with master.

It will contain all the commits from the master branch, with your additional commits on TOP.

===================================================================

// To apply a simple patch
git apply <path to filename.patch>


// to apply a git formatted patch - containing git header.
git am <path_to_filename.patch>

===================================================================


arp program in linux


The arp program is used to read (and manipulate) the ARP cache, which maintains a table mapping IP addresses to their corresponding MAC addresses.

In order to get the arp program, you need to install net-tools.

What other binaries are provided by net-tools?

You can make use of the "dnf repoquery --list net-tools" command:

Output:
# dnf repoquery --list net-tools

/usr/bin/netstat
/usr/lib/systemd/system/arp-ethers.service
/usr/sbin/arp
/usr/sbin/ether-wake
/usr/sbin/ifconfig
/usr/sbin/ipmaddr
/usr/sbin/iptunnel
/usr/sbin/mii-diag
/usr/sbin/mii-tool
/usr/sbin/nameif
/usr/sbin/plipconfig
/usr/sbin/route
/usr/sbin/slattach

/usr/share/licenses/net-tools
/usr/share/licenses/net-tools/COPYING
/usr/share/locale/cs/LC_MESSAGES/net-tools.mo
/usr/share/locale/de/LC_MESSAGES/net-tools.mo
/usr/share/locale/et_EE/LC_MESSAGES/net-tools.mo
/usr/share/locale/fr/LC_MESSAGES/net-tools.mo
/usr/share/locale/pt_BR/LC_MESSAGES/net-tools.mo
/usr/share/man/de/man5/ethers.5.gz
/usr/share/man/de/man8/arp.8.gz
/usr/share/man/de/man8/ifconfig.8.gz
/usr/share/man/de/man8/netstat.8.gz
/usr/share/man/de/man8/plipconfig.8.gz
/usr/share/man/de/man8/rarp.8.gz
/usr/share/man/de/man8/route.8.gz
/usr/share/man/de/man8/slattach.8.gz
/usr/share/man/fr/man5/ethers.5.gz
/usr/share/man/fr/man8/arp.8.gz
/usr/share/man/fr/man8/ifconfig.8.gz
/usr/share/man/fr/man8/netstat.8.gz
/usr/share/man/fr/man8/plipconfig.8.gz
/usr/share/man/fr/man8/rarp.8.gz
/usr/share/man/fr/man8/route.8.gz
/usr/share/man/fr/man8/slattach.8.gz
/usr/share/man/man5/ethers.5.gz
/usr/share/man/man8/arp.8.gz
/usr/share/man/man8/ether-wake.8.gz
/usr/share/man/man8/ifconfig.8.gz
/usr/share/man/man8/ipmaddr.8.gz
/usr/share/man/man8/iptunnel.8.gz
/usr/share/man/man8/mii-diag.8.gz
/usr/share/man/man8/mii-tool.8.gz
/usr/share/man/man8/nameif.8.gz
/usr/share/man/man8/netstat.8.gz
/usr/share/man/man8/plipconfig.8.gz
/usr/share/man/man8/rarp.8.gz
/usr/share/man/man8/route.8.gz
/usr/share/man/man8/slattach.8.gz
/usr/share/man/pt/man8/arp.8.gz
/usr/share/man/pt/man8/ifconfig.8.gz
/usr/share/man/pt/man8/netstat.8.gz
/usr/share/man/pt/man8/rarp.8.gz
/usr/share/man/pt/man8/route.8.gz

Sunday, May 28, 2017

gvim failed during git commit


I was using gvim as my editor.

In spite of using a proper commit message and quitting gvim properly (top question on stackoverflow :)),
it failed with the message "Aborting commit due to empty commit message."

$ git commit
Aborting commit due to empty commit message.


All you need to do is:

git config core.editor "gvim -f"

Then, try git commit; it should work.



Why did it fail?

Looking at man gvim:

       -f          Foreground.  This option should be used when Vim is executed by a program that
                   will wait for the edit session to finish (e.g. mail).

Friday, May 26, 2017

tar/untar vs copy




tar is faster in most cases when compared to cp:


// directory copy
# tar cf - directory_to_copy/  |  tar xfp -  -C  /myowntarget/

// few files copy
# tar cf - file1 file2 file3   |  tar xfp -  -C  /myowntarget/

// copy all
# tar cf  - *   |  tar xfp -  -C  /myowntarget/ 


The "-" in the first tar means stdout, which is fed through the pipe and extracted again by the second tar (p preserves permissions), with the target directory specified by -C.


Some interesting discussion here:

https://superuser.com/questions/788502/why-is-tartar-so-much-faster-than-cp
https://stackoverflow.com/questions/316078/interesting-usage-of-tar-but-what-is-happening


Thursday, May 25, 2017

firewalld - query and open port

-------------------------
Open port 8443/tcp  in firewalld :

// first query
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=8443/tcp
no

// add port
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --add-port=8443/tcp  
success

// verify
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=8443/tcp
yes
-------------------------
Open port 53/udp   in firewalld :

// first query
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=53/udp
no

// add port
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --add-port=53/udp
success

//verify
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=53/udp
yes

-------------------------

So, to make the change both immediately and for the future, you need to execute two commands:

// immediate - run time
firewall-cmd --add-port=443/tcp

// for future too
firewall-cmd --permanent --add-port=443/tcp
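
Alternatively, add just the permanent rule and then reload firewalld to make it active immediately:

firewall-cmd --reload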

source: http://www.firewalld.org/documentation/man-pages/firewall-cmd.html

Wednesday, May 24, 2017

user with sudo access without password





--------------------------------------------

Often, I wish to carry out privileged operations while logged in as a normal user.

You can add this line to /etc/sudoers (edit it with visudo) to avoid being asked for a password every time.


<username_here>   ALL=(ALL)    NOPASSWD: ALL

I usually add it below this line:

## Same thing without a password
# %wheel    ALL=(ALL)    NOPASSWD: ALL
<username_here>   ALL=(ALL)    NOPASSWD: ALL

--------------------------------------------

You can try commands like this:
$ sudo vgs

You can become root simply by:
$ sudo su


--------------------------------------------

Thursday, May 18, 2017

statically typed vs dynamically typed language

Statically typed languages 'type check' at compile time and the type can NOT change. (Don't get cute with type-casting comments, a new variable/reference is created).
Dynamically typed languages type-check at run-time and the type of a variable CAN be changed at run-time.


Python, bash - dynamically typed, interpreted languages

C, C++, Go - statically typed, compiled languages
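
A quick way to see the dynamic side in action from the shell (assuming python3 is installed):

$ python3 -c 'x = 5; x = "five"; print(type(x))'
<class 'str'>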

source: http://stackoverflow.com/questions/1517582/what-is-the-difference-between-statically-typed-and-dynamically-typed-languages

Tuesday, May 16, 2017

S2I in OpenShift (Kubernetes) for building Docker container image


This post is about S2I, the source-to-image process used to build application container images for OpenShift.


About S2I :


Source-to-Image (S2I) is a framework that makes it "easy to write images" that take application source code as an input and produce a new image that runs the assembled application as output.

so, input -> application source code
    output -> image


Two basic concepts:

1. the build process

2. S2I scripts.


Build process:


During the build process, S2I must place sources and scripts inside the builder image.

So, what is a builder image here?
 - It is the one which is going to build the application source, so it should contain the bits necessary to build the application.

   For example, for building a Python-based application: all the necessary Python libs.


S2I creates a tar file that contains the sources and scripts, then "streams" that file into the builder image.

source + scripts --(packed into)--> tar --(streamed into)--> builder image --(produces)--> container image

The tar file is extracted into the default directory /tmp (this can be modified with the --destination flag).

tar + sh are necessary inside the builder image to carry out the above operation.
If tar + sh are NOT available, an additional container build is required to put both the sources and scripts inside the image, followed by the usual s2i build procedure.

After the untar, the assemble script is executed.


S2I scripts:


assemble
   - builds the application artifacts from the source and places them into appropriate directories inside the image.

run
  - executes your application.

save-artifacts (optional)
    - gathers all dependencies that can speed up the builds that follow.
      // for ruby: the gems installed; for java: the m2 contents.

usage (optional)
 - informs how to properly use your image.

test/run (optional)
     - creates a simple process to check if the image is running properly.


Creating S2I builder image:


s2i tool -> for creating builder images.


The builder image contains the specific intelligence required to produce the executable image (aka the build artifacts).


simple workflow:
 1. download the s2i scripts (or use the ones from inside the builder image)
 2. download the application source.
 3. s2i streams the scripts and application sources into the builder image container.
 4. it runs the assemble script, which is defined in the builder image.
 5. save the final image.


Builder image -> responsible for actually building the application (so it has to contain the necessary libraries and tools needed to build and run the application).

It needs the s2i scripts to actually perform the build and run operations:

 - assemble for the build of the application
 - run for running the application


// for bootstrapping a new s2i enabled image repo.
// generates skeleton .s2i directory and populate it with sample s2i scripts (which you can start hacking on).

s2i create <image name> <destination directory>

Example:
// Here lighttpd-centos7 is the *future builder image* name
// s2i-lighttpd is directory created
s2i create lighttpd-centos7 s2i-lighttpd


// build test-app using lighttpd-centos7 as the builder image; the output image is lighttpd-centos7-app
s2i build test/test-app lighttpd-centos7 lighttpd-centos7-app


Building application image using builder image:


// build a application image using builder image

$ s2i build https://github.com/openshift/django-ex centos/python-35-centos7 hello-python

Here:
source - https://github.com/openshift/django-ex
builder image - centos/python-35-centos7 // this should be present either locally or at docker hub.
output tagged image - hello-python


// You can run the built image as below :
$ docker run -p 8080:8080 hello-python


You can verify the application by using the web URL http://localhost:8080.
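
Or verify from the terminal:

$ curl -I http://localhost:8080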

So, S2I helps to create your docker image just from your github link :) 




Wednesday, May 10, 2017

oc types - Kubernetes / OpenShift concepts


All the below info is available at your command line.
All you need to do is try the oc types command :)


Concepts and Types

Kubernetes and OpenShift help developers and operators build, test, and deploy applications in a containerized cloud environment. Applications may be composed of all of the components below, although most developers will be concerned with Services, Deployments, and Builds for delivering changes.

Concepts:

* Containers:
    A definition of how to run one or more processes inside of a portable Linux
    environment. Containers are started from an Image and are usually isolated
    from other containers on the same machine.
 
* Image:
    A layered Linux filesystem that contains application code, dependencies,
    and any supporting operating system libraries. An image is identified by
    a name that can be local to the current cluster or point to a remote Docker
    registry (a storage server for images).
 
* Pods [pod]:
    A set of one or more containers that are deployed onto a Node together and
    share a unique IP and Volumes (persistent storage). Pods also define the
    security and runtime policy for each container.
 
* Labels:
    Labels are key value pairs that can be assigned to any resource in the
    system for grouping and selection. Many resources use labels to identify
    sets of other resources.
 
* Volumes:
    Containers are not persistent by default - on restart their contents are
    cleared. Volumes are mounted filesystems available to Pods and their
    containers which may be backed by a number of host-local or network
    attached storage endpoints. The simplest volume type is EmptyDir, which
    is a temporary directory on a single machine. Administrators may also
    allow you to request a Persistent Volume that is automatically attached
    to your pods.
 
* Nodes [node]:
    Machines set up in the cluster to run containers. Usually managed
    by administrators and not by end users.
 
* Services [svc]:
    A name representing a set of pods (or external servers) that are
    accessed by other pods. The service gets an IP and a DNS name, and can be
    exposed externally to the cluster via a port or a Route. It's also easy
    to consume services from pods because an environment variable with the
    name _HOST is automatically injected into other pods.
 
* Routes [route]:
    A route is an external DNS entry (either a top level domain or a
    dynamically allocated name) that is created to point to a service so that
    it can be accessed outside the cluster. The administrator may configure
    one or more Routers to handle those routes, typically through an Apache
    or HAProxy load balancer / proxy.
 
* Replication Controllers [rc]:
    A replication controller maintains a specific number of pods based on a
    template that match a set of labels. If pods are deleted (because the
    node they run on is taken out of service) the controller creates a new
    copy of that pod. A replication controller is most commonly used to
    represent a single deployment of part of an application based on a
    built image.
 
* Deployment Configuration [dc]:
    Defines the template for a pod and manages deploying new images or
    configuration changes whenever those change. A single deployment
    configuration is usually analogous to a single micro-service. Can support
    many different deployment patterns, including full restart, customizable
    rolling updates, and fully custom behaviors, as well as pre- and post-
    hooks. Each deployment is represented as a replication controller.
 
* Build Configuration [bc]:
    Contains a description of how to build source code and a base image into a
    new image - the primary method for delivering changes to your application.
    Builds can be source based and use builder images for common languages like
    Java, PHP, Ruby, or Python, or be Docker based and create builds from a
    Dockerfile. Each build configuration has web-hooks and can be triggered
    automatically by changes to their base images.
 
* Builds [build]:
    Builds create a new image from source code, other images, Dockerfiles, or
    binary input. A build is run inside of a container and has the same
    restrictions normal pods have. A build usually results in an image pushed
    to a Docker registry, but you can also choose to run a post-build test that
    does not push an image.
 
* Image Streams and Image Stream Tags [is,istag]:
    An image stream groups sets of related images under tags - analogous to a
    branch in a source code repository. Each image stream may have one or
    more tags (the default tag is called "latest") and those tags may point
    at external Docker registries, at other tags in the same stream, or be
    controlled to directly point at known images. In addition, images can be
    pushed to an image stream tag directly via the integrated Docker
    registry.
 
* Secrets [secret]:
    The secret resource can hold text or binary secrets for delivery into
    your pods. By default, every container is given a single secret which
    contains a token for accessing the API (with limited privileges) at
    /var/run/secrets/kubernetes.io/serviceaccount. You can create new
    secrets and mount them in your own pods, as well as reference secrets
    from builds (for connecting to remote servers) or use them to import
    remote images into an image stream.
 
* Projects [project]:
    All of the above resources (except Nodes) exist inside of a project.
    Projects have a list of members and their roles, like viewer, editor,
    or admin, as well as a set of security controls on the running pods, and
    limits on how many resources the project can use. The names of each
    resource are unique within a project. Developers may request projects
    be created, but administrators control the resources allocated to
    projects.
 
For more, see https://docs.openshift.com

Usage:
  oc types [options]

Examples:
  # View all projects you have access to
  oc get projects

  # See a list of all services in the current project
  oc get svc

  # Describe a deployment configuration in detail
  oc describe dc mydeploymentconfig

  # Show the images tagged into an image stream
  oc describe is ruby-centos7

Use "oc options" for a list of global command-line options (applies to all commands).

screen capture for demo recordmydesktop


Screen capture (video) for demo purpose in Linux world:

gtk-recordMyDesktop

You can launch GUI and use it.


If you are not interested in sound, use (in cli mode):
# recordmydesktop --no-sound

When you want to stop recording, use Ctrl + C.

By default, it records in ogv format. You can directly upload this video to youtube.

I faced some issues, and it creates a log file in the home directory, namely gtk-recordMyDesktop-crash.log.
Check it out for troubleshooting.



If you wish to cut a portion of a video, you can make use of ffmpeg.


ffmpeg -i in.ogv -ss 00:01:05.0 -c copy -t 00:00:5.0  out.ogv

-ss -> start time

-t -> duration

This cuts a 5-second clip starting at 1 minute 5 seconds into the video.

The time format is hh:mm:ss.

Wednesday, May 3, 2017

Jenkins and related terminology


Although I have used Jenkins as a consumer, I didn't have much idea about the terminology used there (pipeline, artifact, build, etc.).

I started looking into Jenkins more using this document. ( https://jenkins.io/user-handbook.pdf )

Installation - Follow steps from this:

To start the service:
systemctl start jenkins

Now, you can access Jenkins from the browser (by default at http://localhost:8080).

For a sample pipeline, follow steps from this: 

so, that's it. 

There seems to be so much info in Jenkins, but I restricted myself to understanding the main terminology used.

Please refer to the documentation for more 😎

This got a few things clarified for me:

  • Artifact:
Immutable file created during pipeline/Build.

  • Build:
Result of single execution of the project.

  • Pipeline:
User-defined model of a continuous delivery pipeline.
Also, a suite of plugins which support implementing and integrating continuous delivery pipelines into Jenkins.
     Pipeline as code -> Jenkinsfile -> kept in the project source code.