Monday, December 24, 2018

Permission denied error during "git push"


I was trying to push a new commit to a brand new repo,
but faced this error:

remote: Permission to /  denied to .
fatal: unable to access '': The requested URL returned error: 403

The problem here: I had cloned a read-only repo.


OK, now try:
git config -l

Check remote.origin.url in the above output.



Now, change it to something like:
git remote set-url origin https://USERNAME@github.com/USERNAME/YOURREPO

So, we are making it read-write.



Now, carry out the "git push" again. It should work.
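The whole fix can be rehearsed end-to-end in a throwaway repo. USERNAME/YOURREPO are placeholders, and the throwaway repo stands in for the real clone:

```shell
# Hypothetical sketch: rewrite a read-only remote URL to the
# authenticated read-write form. USERNAME/YOURREPO are placeholders.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
# simulate the read-only clone URL
git remote add origin https://github.com/USERNAME/YOURREPO
# switch it to the authenticated form
git remote set-url origin https://USERNAME@github.com/USERNAME/YOURREPO
# verify before pushing again
git remote get-url origin
```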




Sunday, December 16, 2018

Creating new objects from existing objects in OpenShift


---------
To create a new object from an existing object, make use of
# oc get --export    -o yaml 

You can create a new template out of this and use it to create new objects.


Using "--export" removes unnecessary info from the output.

---------

To simply view a resource you can use

# oc get -o yaml

---------

failed_when directive in ansible



failed_when directive in ansible:

1.
You can specify a condition in the failed_when directive describing what output (stdout) counts as a failure.

Or

2.
you can ignore the failure altogether by setting failed_when to "False".

============================================

1.
While creating users you can use something like:

failed_when: "'Password missing' in result.stdout"




2.
failed_when: False

Even if the command fails, Ansible does not exit and proceeds further.


p1.yml
===================
---
- hosts: webservers
  tasks:
  - name: install tftp
    yum: pkg=tftp state=installed
    failed_when: False

===================


# cat inventory_simple
[webservers]
gant



# ansible-playbook -i inventory_simple -s p1.yml


So, even if the tftp installation fails (for some reason), the play proceeds further and does not error out.
--

When the rc value is neither 0 nor 1 but something else, then fail:
    failed_when: ( results.rc not in [ 0, 1 ] )
--
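Putting the rc check together: failed_when works against a variable captured with register. A minimal hypothetical task (the command /usr/local/bin/check.sh is a placeholder):

```yaml
- name: run a check that may legitimately return 1
  command: /usr/local/bin/check.sh     # placeholder command
  register: results
  failed_when: results.rc not in [0, 1]
```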


Thursday, November 22, 2018

github - get patch file from PR



So, you have a PR like this:
https://github.com/openshift/openshift-ansible/pull/10077


To get the corresponding patch, just add .patch to the PR URL.

https://github.com/openshift/openshift-ansible/pull/10077.patch




https://owenou.com/ten-things-you-didnt-know-git-and-github-could-do


Wednesday, November 21, 2018

Get IP address of the VM from the host



Get IP address of the VM from the host

First:
# virsh net-list
< list of networks>


Next:
# virsh net-dhcp-leases   

Expiry Time          MAC address        Protocol  IP address                Hostname        Client ID or DUID
< get hostname and the corresponding IP address here  >



Monday, November 19, 2018

interesting challenges for fun :)


interesting challenges to try for 30 days :) :)

http://hackerella.com/30-day-challenge-ideas/






file command in linux


file command

file command can be used to determine filetype.

For example:

# file file.txt
file.txt: ASCII text

# file testfile.pdf

testfile.pdf: PDF document, version 1.5


You can also check block devices (you need root permission though):

file -s  /dev/sda1 

$ sudo file -s /dev/sda1
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=ABCD-ABCD-XYZ (needs journal recovery) (extents) (large files) (huge files)
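A quick self-contained way to try this out, assuming the standard file utility is installed:

```shell
# Create a plain text file and ask file to classify it
set -e
cd "$(mktemp -d)"
printf 'hello world\n' > file.txt
file file.txt        # prints: file.txt: ASCII text
```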



 

systemctl list units




systemctl list-units

Show only the failed units:

systemctl list-units   --state=failed





Sunday, November 18, 2018

Running heketi commands in Kubernetes / OpenShift


heketi is the REST API client/server to manage glusterfs volumes.

https://github.com/heketi/heketi


Before running heketi commands, you need to export three variables.

HEKETI_CLI_SERVER
HEKETI_CLI_USER
HEKETI_CLI_KEY


1.
oc get route


Use the above URL with http:// added in front.

# export HEKETI_CLI_SERVER=http://<url above>


2.
The username configured:

# export HEKETI_CLI_USER=admin

3.

Get the admin key from the command:
# oc get pods heketi-cns-1-sgwwg -o yaml | grep HEKETI_ADMIN_KEY -A1
    - name: HEKETI_ADMIN_KEY
      value: admin

# export HEKETI_CLI_KEY=admin

Note: use kubectl in the case of Kubernetes. The example shows oc, which is for OpenShift.
---------------------------


Now, you can verify the heketi commands as:

# heketi-cli cluster list


Friday, November 9, 2018

git - check source code of a file for a range of commits



See history for two lines in a file

 git log --pretty=short -u -L 6,7:Dockerfile


Here, 6,7 represents the line range. Adjust according to your needs.

The git version should be 1.8.4 or later.


originally from : https://stackoverflow.com/questions/8435343/retrieve-the-commit-log-for-a-specific-line-in-a-file/31985012
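A self-contained way to try it, using a throwaway repo with a three-line Dockerfile (names and contents are arbitrary):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
printf 'FROM centos:7\nRUN yum update -y\nCMD ["bash"]\n' > Dockerfile
git add Dockerfile
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add Dockerfile"
# show the history of lines 1-2 only
git log --pretty=short -u -L 1,2:Dockerfile
```

The output shows only the commits (and hunks) that touched lines 1-2 of the file.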

Installing git from source



While trying to install git from source, I faced this error:
Can't locate ExtUtils/MakeMaker.pm


You can fix it by installing:

yum install perl-ExtUtils-MakeMaker



Note 1:
For installing git, I followed this link:
Source: https://git-scm.com/book/eo/v1/Ekkomenci-Installing-Git

In general, avoid installing from source.
Try to install from a repo so that dependencies are taken care of automatically.



Monday, November 5, 2018

Writing a simple systemd script

cd /usr/lib/systemd/system/
Create a file named simple.service with the following contents:
===========================
[Unit]
Description=Simple Service
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/test
Restart=on-abort

[Install]
WantedBy=multi-user.target
===========================

Here, /usr/bin/test is the example program.

Now, check the default status:
$ systemctl status simple

● simple.service - Simple Service
   Loaded: loaded (/usr/lib/systemd/system/simple.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Start the service
$ sudo systemctl start simple

Now, check the status:
$ sudo systemctl status simple
output :
● simple.service - Simple Service
   Loaded: loaded (/usr/lib/systemd/system/simple.service; disabled; vendor preset: disabled)
   Active: active (running) since
 Main PID: 23791 (test)
   CGroup: /system.slice/simple.service
           └─23791 /usr/bin/test
systemd[1]: Started Simple Service.
systemd[1]: Starting Simple Service...

To stop the service:
$ sudo systemctl stop simple

To enable the service by default
sudo systemctl enable simple

Created symlink from /etc/systemd/system/multi-user.target.wants/simple.service to /usr/lib/systemd/system/simple.service.

$ ls -l /etc/systemd/system/multi-user.target.wants/simple.service

lrwxrwxrwx 1 root root  /etc/systemd/system/multi-user.target.wants/simple.service -> /usr/lib/systemd/system/simple.service

To get more of an idea about the different keywords used in a systemd unit file, refer to man 5 systemd.service.

For example, the above unit file uses the following keywords:

ExecStart = command executed when the service is started.
After = start this unit only after the listed unit has started.
Restart = when to restart the service.
WantedBy = which target (run level) requires this unit.

Wednesday, October 10, 2018

Sunday, October 7, 2018

add calendar event from google search



Do a google search like this and see the magic:

October 10th @ 6am add calendar


of course, you should be logged into your account before adding :)


Wednesday, August 29, 2018

Creating a loop device (for testing)

==============================================
// create the backing file
# dd if=/dev/zero of=loopbackfile.img bs=100M count=10
10+0 records in
10+0 records out
1048576000 bytes (1.0 GB) copied, 1.26748 s, 827 MB/s

# du -sh loopbackfile.img
1000M loopbackfile.img

// get free loop device and associate with backing file
// -f free   ||  -P partscan to force kernel scan partition table
# losetup --show -fP loopbackfile.img

// show the devices
# losetup -a

// here loopX is the device associated - check with above step
# mkfs.xfs /dev/loopX


# mkdir -p /testdata

// do mount
# mount /dev/loopX /testdata


// verify
# df -hP /testdata/

# mount | grep testdata

==============================================



Saturday, August 25, 2018

Tuesday, August 21, 2018

Ansible limiting to hosts/groups



Ansible: limiting playbook execution to a few hosts/groups.


Limit to one host:
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"

Limit to multiple hosts:
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"

All but one host:
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'

Limit to a group:
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'

All but one group:
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!group1'

source:
https://ansible-tips-and-tricks.readthedocs.io/en/latest/ansible/commands/#limit-to-one-or-more-hosts

Friday, August 17, 2018

Better Battery management in Laptop



For better battery management in Laptops, use TLP.

yum install tlp -y


To start:
systemctl start tlp

To check status:
systemctl status tlp


Configuration here:
/etc/default/tlp


Ref: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html



Wednesday, August 8, 2018

git diff - some additional options



git diff with word level precision :)


// note there is a dot at the end
# git diff  --word-diff-regex=.


// git diff, suppressing the context lines around the actual diff and showing only the changed lines.
# git diff -U0


more will be updated..





Tuesday, August 7, 2018

untar xz extension tar file






.tar.gz or .tgz extension -> z flag

-> tar zxvf <tarfile.tgz>

-> gzip is used here


.xz extension -> J flag

-> tar Jxvf <tarfile.xz>

-> xz utils are used here
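For the gzip case, the full round trip looks like this (file names here are made up):

```shell
set -e
cd "$(mktemp -d)"
mkdir data && echo "hello" > data/greeting.txt
# z flag: create a gzip-compressed tarball
tar zcf data.tgz data
# extract with the matching z flag
mkdir extracted && tar zxvf data.tgz -C extracted
cat extracted/data/greeting.txt   # prints: hello
```

The .xz case is identical with J in place of z.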

Monday, August 6, 2018

Update a running docker container and create a new image


Sometimes during development, you may want to update a few files inside a docker container and create a new image out of it.

You can make use of docker commit.

First, get the container ID of the container you wish to save, using docker ps:

# docker ps

02738dcd4cc2


Then, make changes inside the docker container (02738dcd4cc2) .

For example,

docker exec -it 02738dcd4cc2 bash
# touch /root/testfile 
# edit some configuration


Now, exit the docker container (02738dcd4cc2) and save a new image:
# docker commit 02738dcd4cc2 <new_image_name_you_have_given>


Now, you can run this <new_image_name_you_have_given> to see the changes.


Use this option judiciously.

Ideally, you should always make use of Dockerfile to make changes and test :)

Wednesday, August 1, 2018

How to get package dependencies of a specific package



How to get package dependencies of a specific package ?


// so check the package exists
$ rpm -qa | grep vim-X11
vim-X11-8.0.600-1.fc24.x86_64


// Now, get all dependencies
$ rpm -q --requires  vim-X11
/bin/sh
/bin/sh
/bin/sh
gtk3
hicolor-icon-theme
libICE.so.6()(64bit)
libSM.so.6()(64bit)
libX11.so.6()(64bit)
libXt.so.6()(64bit)
libacl.so.1()(64bit)
libacl.so.1(ACL_1.0)(64bit)
libattr >= 2.4
libc.so.6()(64bit)
libc.so.6(GLIBC_2.11)(64bit)
libc.so.6(GLIBC_2.14)(64bit)
libc.so.6(GLIBC_2.15)(64bit)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.3)(64bit)
libc.so.6(GLIBC_2.3.4)(64bit)
libc.so.6(GLIBC_2.4)(64bit)
libcairo.so.2()(64bit)
libdl.so.2()(64bit)
libdl.so.2(GLIBC_2.2.5)(64bit)
libgdk-3.so.0()(64bit)
libgdk_pixbuf-2.0.so.0()(64bit)
libgio-2.0.so.0()(64bit)
libglib-2.0.so.0()(64bit)
libgobject-2.0.so.0()(64bit)
libgpm.so.2()(64bit)
libgtk-3.so.0()(64bit)
libm.so.6()(64bit)
libm.so.6(GLIBC_2.2.5)(64bit)
libpango-1.0.so.0()(64bit)
libpangocairo-1.0.so.0()(64bit)
libpthread.so.0()(64bit)
libpthread.so.0(GLIBC_2.2.5)(64bit)
libselinux.so.1()(64bit)
libtinfo.so.6()(64bit)
perl(:MODULE_COMPAT_5.22.3)
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1
rtld(GNU_HASH)
vim-common = 2:8.0.600-1.fc24

Thursday, July 19, 2018

Extend your logical volume



 // current scenario - get VG, LV and PV information
# vgs;pvs;lvs

// check the free disks available
# cat /proc/partitions

// So, sdc is free and available; do a pvcreate
# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.

// This should show
# lvmdiskscan -l
   /dev/sdc                                                                             [      10.00 GiB] LVM physical volume



// extend the VG which corresponds to the LV - so here rhel_dhcp43-130 is the VG

# vgextend rhel_dhcp43-130 /dev/sdc


// Now extend the lv - here rhel_dhcp43--130-root is the LV

# lvextend -l +100%FREE /dev/mapper/rhel_dhcp43--130-root
  Size of logical volume rhel_dhcp43-130/root changed from 7.00  GiB (1792 extents) to <17.00 GiB (4351 extents).
  Logical volume rhel_dhcp43-130/root successfully resized.


// Check the size  - hmm..size not changed yet :(

# df -h


// resize underlying filesystem

# xfs_growfs -d /

// hooray - size changed :)

# df -h


============================================

Another day. 


--

vgs
lvs

--
// This makes the final size of ubuntu-home 60G.
// Earlier it was larger, say for example 160G.

# lvresize -r  -L 60G /dev/fedora_localhost-live/ubuntu-home  
fsck from util-linux 2.38.1
/dev/mapper/fedora_localhost--live-ubuntu--home: 315/10485760 files (2.5% non-contiguous), 939484/41943040 blocks
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/mapper/fedora_localhost--live-ubuntu--home to 15728640 (4k) blocks.
The filesystem on /dev/mapper/fedora_localhost--live-ubuntu--home is now 15728640 (4k) blocks long.

  Size of logical volume fedora_localhost-live/ubuntu-home changed from 160.00 GiB (40960 extents) to 60.00 GiB (15360 extents).
  Logical volume fedora_localhost-live/ubuntu-home successfully resized.

--
// check change in lvs and vgs(check free here)
lvs;

vgs;
--

// take whatever free space remains into the ubuntu-root logical volume

# lvextend -l +100%FREE /dev/mapper/fedora_localhost--live-ubuntu--root  
  Size of logical volume fedora_localhost-live/ubuntu-root changed from 10.00 GiB (2560 extents) to <113.13 GiB (28961 extents).
  Logical volume fedora_localhost-live/ubuntu-root successfully resized.



// mandatory check after lvextend

e2fsck -f   /dev/mapper/fedora_localhost--live-ubuntu--root   
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/fedora_localhost--live-ubuntu--root: 225489/655360 files (0.1% non-contiguous), 2431025/2621440 blocks
[root@fedora saran]#
[root@fedora saran]# resize2fs  /dev/mapper/fedora_localhost--live-ubuntu--root   

// mandatory so the filesystem grows to the new LV size.

resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/mapper/fedora_localhost--live-ubuntu--root to 29656064 (4k) blocks.
The filesystem on /dev/mapper/fedora_localhost--live-ubuntu--root is now 29656064 (4k) blocks long.

--
vgs
lvs
--
=========================================


Check shell script online



Check your shell script online for any errors:

https://www.shellcheck.net/


Hooray, it's GPLv3 :)

Welcome message in your terminal




Install cowsay and get a welcome message (with date/time) every time you open the terminal.


1.
sudo yum install cowsay -y

2.
cowsay "Welcome $USER! It's now $(date '+%A %B %d %Y %r')"
Add the above line in your ~/.bashrc.



Now, open a fresh terminal. 

/ Welcome <user>! It's now Thursday \
\ July 19 2018 12:54:54 PM IST        /
 -------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||


It's fun :)

Wednesday, July 18, 2018

Cleanup and restart docker-storage-setup


Cleanup and setting up docker-storage-setup

In this example, vdb partition is used.



 vgremove docker-vg

 pvremove /dev/vdb1

 parted /dev/vdb rm 1

 wipefs -af /dev/vdb

 systemctl stop docker

 rm -rf /var/lib/docker

 rm -f   /etc/sysconfig/docker-storage

 docker-storage-setup

 systemctl start docker






Thursday, July 5, 2018

docker image - remove old tag and add new tag



-----------------

# Tag the image as below
# docker tag   e0d6b0ded794   <your_newtag_here>

Where e0d6b0ded794 is the image ID of the image you wish to retag.

----------------

#remove old tag:

# docker rmi <your_old_tag_here>

----------------

# Check the image, you should only see the newtag:

# docker images | grep <your_newtag_here>

-----------------

To load a tar'd docker image:
# docker load --input <image.tar>


Building and running gluster container

Building and running gluster container:

# clone
git clone https://github.com/gluster/gluster-containers.git

cd gluster-containers/CentOS/

# Build using dockerfile
docker build -t glusterfs .

# Now run
docker run -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d --privileged=true --net=host -v /dev/:/dev  glusterfs


# Now, you can login to the container and check whether glusterd is running

Also, peer probe to other gluster containers using its IP address.


The above workflow is for building the container from source.

You can also simply pull the container using:
 # docker pull gluster/gluster-centos

and then start running as mentioned above.

Wednesday, June 27, 2018

Change coredump file location




# verify current setting
sysctl   kernel.core_pattern


You can change default core dump as below:
echo '/var/log/testprogram/core_%e.%p' | sudo tee /proc/sys/kernel/core_pattern

# verify the setting
sysctl   kernel.core_pattern


Now, any program that crashes will generate its coredump in the directory configured above.

core_%e.%p
Here, %e - program name, %p - PID of the process

Check, man core for more details.


This article discusses how to compress the generated core file:

https://docs.solace.com/Configuring-and-Managing/SW-Broker-Specific-Config/Docker-Tasks/Config-Core-Dump-Mgmt.htm

tee command vs redirect symbol


Usually I make use of > to redirect to a file.

We can also redirect using tee command.


Something like:

./test | tee test_output.log
So, here instead of > we are using tee.



What is the advantage/difference?

In addition to writing to the file, tee copies the data to STDOUT.

Also, sometimes we want to see only specific lines matching a pattern; then we can use something like:

./runprogram | tee program.log | grep Error
So, in this example, it logs all output but displays only those lines containing "Error".


Ref: man tee 
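Both behaviours can be seen with plain shell commands (the log content here is made up):

```shell
set -e
cd "$(mktemp -d)"
# tee writes to the file AND passes the data through to stdout,
# so grep can filter the live stream
printf 'ok\nError: disk full\nok\n' | tee program.log | grep Error
# the log file still contains everything - all 3 lines
wc -l < program.log
```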




Friday, June 1, 2018

git add a new branch and push it to remote server




Add a new branch and push it to remote server:

git checkout -b new_branch

git push -u origin  new_branch



Wednesday, May 16, 2018

Check whether all nodes are in sync(timewise)




To check whether time is in sync (ntp) you can make use of ansible.

First, fill up your inventory file with all the nodes.

Then,

ansible all -m shell -a "date"

"all" here will contact all the nodes in the inventory and get the time.


If it varies widely, you have an issue. Check whether an NTP service like
chronyd/ntpd is running on the nodes.



Running out of space in /var directory due to docker images



Docker images are stored under /var/lib/docker.

What if you run out of space in the /var partition?

OK, move the /var/lib/docker directory to /root (assuming space is available under /root) and create a symlink from /var/lib/docker to /root/docker.

Ensure you stop the docker service before moving, and verify the docker daemon works fine after creating the symlink.

so, all the commands put together:

systemctl stop docker

mv /var/lib/docker /root/

ln -sf  /root/docker /var/lib/docker

systemctl start docker

systemctl status  docker

that's it :)

git push only the current branch




# This will make "git push" push only the current branch

git config --global push.default simple

Tuesday, May 8, 2018

Delete all evicted pods in openshift


Delete all evicted pods in openshift:

Please check the commands thoroughly before executing them.

# This is for the current namespace
# eval "$(oc get pods -o json | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"


# This is for all namespaces
# eval "$(oc get pods -o json --all-namespaces | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"


Here,  jq - commandline JSON processor
You need to install jq first.


PS: Thanks to Stackoverflow!

Another PS:
As per the reader's comment :
Another way per-namespace without jq.

# for evicted in $(oc get pods | grep "Evicted" | awk '{print $1}'); do oc delete pod ${evicted}; done



Friday, May 4, 2018

Show only stopped docker containers

# show running containers
docker ps


# show running and stopped containers
docker ps -a


# adding -q makes it quieter (prints only the IDs) :o)
docker ps -q
docker ps -q -a


# Now show only the stopped container
diff <(docker ps -a -q) <(docker ps -q )
 
   This shows the difference between the outputs of the two commands, and
in this case the difference is the stopped containers.
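The same trick works on any two lists; here printf stands in for the docker commands (the container IDs are made up, and <(...) requires bash):

```shell
# <(...) is bash process substitution: each command's output is
# presented to diff as if it were a file
all=$(printf 'c1\nc2\nc3\n')      # stands in for: docker ps -a -q
running=$(printf 'c1\nc3\n')      # stands in for: docker ps -q
# diff exits non-zero when there is a difference, hence "|| true"
diff <(echo "$all") <(echo "$running") || true
```

Here c2 shows up as the "stopped container".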

 




Thursday, May 3, 2018

delete dangling docker images and remove stopped containers

--------
delete dangling docker images :

docker rmi $(sudo docker images --filter "dangling=true" -q --no-trunc)

--------

Delete stopped containers (and not running containers):

docker rm $(docker ps -a -q)
--------

Delete untagged docker images :

docker rmi `docker images | grep "^<none>" | awk '{print $3}' ` --force

If --force is not used, it throws the error "image is being used by stopped container".

---------

# To delete an image on a remote server, where the image name contains s3-apb:


ssh remote_server 'docker rmi `docker images | grep s3-apb | awk '"'"'{print $3}'"'"' ` --force'

Quoting inside a remote command is tricky: the quotes get evaluated by the local shell itself. To avoid that, you need to escape them as above.

source: https://stackoverflow.com/questions/20498599/how-to-escape-the-single-quote-character-in-an-ssh-remote-bash-command
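The quoting dance can be tested locally with bash -c standing in for the ssh remote shell (no remote server needed):

```shell
# '"'"' = close the single-quoted string, emit a literal single
# quote via double quotes, then reopen the single-quoted string.
# The command bash -c actually receives is:
#   echo one two three | awk '{print $2}'
bash -c 'echo one two three | awk '"'"'{print $2}'"'"''   # prints: two
```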



Friday, April 27, 2018

openshift login issues



I was not able to log in to OpenShift using the web console/CLI.


I came across these links:

https://docs.openshift.com/container-platform/3.9/install_config/configuring_authentication.html#DenyAllPasswordIdentityProvider

https://docs.openshift.com/container-platform/3.9/install_config/configuring_authentication.html#AllowAllPasswordIdentityProvider


So, change it from DenyAll to AllowAll as mentioned in the configuration docs above.


(First, take a backup of the master configuration file mentioned there.)

Restart all the master services.

Voila - Now you should be able to login :)


Wednesday, April 11, 2018

To add/delete a NAT rule

==========================

# To add a rule

// you are listening on a server @127.0.0.1 and want to expose it outside.
// you can add a rule as below.
// now anyone accessing YOUR_NODE_IP_ADDRESS:8443 will be redirected to 127.0.0.1:8443
iptables -t nat -A PREROUTING -p tcp --dport 8443 -j DNAT --to-destination 127.0.0.1:8443

==========================
# To delete a RULE

// This will list according to groups like PREROUTING , INPUT, OUTPUT, POSTROUTING , etc.,
iptables -t nat -L --line-numbers

// to delete a specific rule
iptables -t nat -D PREROUTING <number>

// For example, here to delete third rule
iptables -t nat -D PREROUTING 3

==========================

Monday, April 9, 2018

force delete a pod in openshift (kubernetes)


For kubernetes:

kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>


and for openshift

oc delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>


output: 
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.


This command may be used for deleting pods whose status is Unknown.
As the output says, the pod may still be running. 

--

To delete all pods in a namespace:
// In this example the namespace is 'default'
kubectl delete pods --all  -n  default

To delete all svc in a namespace:
// In this example the namespace is 'default'
kubectl delete svc --all  -n  default

--
 

Thursday, April 5, 2018

label a node in openshift


How to label a node in openshift as 'infra' ? 


====================================================================
# oc describe node dhcp41-180.lab.eng.test.mydomain.com

..
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    glusterfs=storage-host
                    kubernetes.io/hostname=dhcp41-180.lab.eng.test.mydomain.com
                    region=primary
                    role=node
..
====================================================================

# oc label node dhcp41-180.lab.eng.test.mydomain.com region=infra 
error: 'region' already has a value (primary), and --overwrite is false

# oc label node dhcp41-180.lab.eng.test.mydomain.com region=infra   --overwrite
node "dhcp41-180.lab.eng.test.mydomain.com" labeled

====================================================================
# oc describe node dhcp41-180.lab.eng.test.mydomain.com
..
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    glusterfs=storage-host
                    kubernetes.io/hostname=dhcp41-180.lab.eng.test.mydomain.com
                    region=infra
                    role=node
..
====================================================================

Tuesday, March 20, 2018

Artificial intelligence and Machine learning



Attended a meetup on AI and Machine learning conducted by Harinder.

https://www.meetup.com/All-About-Design-Patterns/events/248506302/


Short notes on same:

AI -> artificially created intelligence

Machine learning is an *important* approach to AI. (There are many approaches out there).

--
Some applications:

Why Candy crush was so famous ?

       According to the player's habits, the game adjusts itself to ensure that the player stays longer.

Facebook, Google Maps, etc., make use of AI.

reCAPTCHA -> used to crowdsource the scanning of old books.

youtube suggestions

Spam  detection

Deep learning

Natural language processing.

---
John McCarthy coined the term AI in 1955.

https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)


many developments followed especially games played by AI.

1997 -> Deep Blue played chess
2005 -> self-driving car
2011 -> Watson won Jeopardy!
2016 -> AlphaGo

Don't take games lightly :)

----------
2005 -> 135 Exabyte human data
Now -> 100,000 exabytes !!
(1 Exabyte -> 1000 PB -> 1PB=1000 TB -> 1TB=1000 GB)

So much of DATA and so  much of computing power, hence good interest now.
------------

Given x and y whose relationship is linear, AI can figure out
 y = mx + c
// slope and intercept

Here, y=mx+c is the model figured out by AI.
It can use the model to get output for any input.

What if it's a scatter plot?

    Make use of (least) squares to get the smallest error.

Fit the DATA -> important term

Fit the DATA to figure out the MODEL.


machine learning -> supervised
                 -> unsupervised

supervised -> regression
           -> classification


supervised -> labelled data.
unsupervised -> unlabelled data.

Regression -> continuous output
classification -> well defined classes

Machine learning:

1. Fit

                  data -> learning -> model

eg: model -> y = mx+c
 
2. Predict
                 testdata -> model -> New value (prediction)


So, out of 30 data points:

Use 20 data points to train and get the model.

Input the remaining 10 -> use the model and get its output.
Compare with the actual output.

This helps to know whether the algorithm is good / bad.

model -> formula / rules.

test data =================
                                                            -> test data accuracy
actual output   -----------------------


train data  =================
                                                               -> train data accuracy
actual output  -----------------------

(test data is part of train data)

Avoid overfitting (100% training accuracy).
(That means the model has merely remembered the data.)


4 important points in ML:

1. good data - data needs to be cleaned.
2. Algorithm
3. Features.
4. Parameter tuning.


AI framework :

https://www.anaconda.com/download

Machine learning in python:

http://scikit-learn.org/stable/







Friday, March 16, 2018

Check whether rpm signed or not


How to check whether an rpm is signed or not?


rpm -K <your rpm file>

If signed, sha1 md5 OK

If not signed, md5 NOT OK (MISSING KEYS: XYZ.. )

Friday, March 9, 2018

install vagrant-libvirt


To install vagrant-libvirt:

vagrant plugin install vagrant-libvirt


Before the above command, you need to run:

yum install libxslt-devel libxml2-devel libvirt-devel libguestfs-tools-c ruby-devel gcc -y

Thursday, March 8, 2018

OpenShift - select resources based on labels



Openshift :

Select resources based on labels:

oc get all --selector="testlabel_whateveryouhave"



Disable a specific node from scheduling in openshift



How to disable a specific node from scheduling in openshift?


[root@dhcp43-231 ~]# oadm manage-node dhcp43-231  --schedulable=false
DEPRECATED: The 'oadm' command is deprecated, please use 'oc adm' instead.
NAME                                STATUS                     ROLES     AGE       VERSION
dhcp43-231   Ready,SchedulingDisabled   master    10d       v1.9.1+a0ce1bc657

[root@dhcp43-231 ~]# oc get nodes 
NAME                                STATUS                     ROLES     AGE       VERSION

dhcp43-231   Ready,SchedulingDisabled   master    10d       v1.9.1+a0ce1bc657


source: https://docs.openshift.org/latest/admin_guide/manage_nodes.html

Wednesday, March 7, 2018

checkout specific branch - github





git fetch upstream pull/<ID>/head:BRANCHNAME

// Where ID is the PR number


git checkout BRANCHNAME


git push upstream BRANCHNAME
// push the branch


Wednesday, February 28, 2018

Tuesday, February 27, 2018

automount nfs share on boot




You just mounted nfs share in a directory.

Now you wish to make this permanent by updating /etc/fstab.


You can do this:

 cat /proc/mounts   | tail -1 >> /etc/fstab

Here, we append the last mount entry to fstab.

Check /etc/fstab and adjust according to your needs.
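A safe way to rehearse this against a scratch copy instead of the real /etc/fstab (Linux-only, since it reads /proc/mounts):

```shell
set -e
cd "$(mktemp -d)"
# work on a copy so a mistake cannot break the real fstab
cp /etc/fstab fstab.copy 2>/dev/null || touch fstab.copy
# append the most recent mount entry
tail -1 /proc/mounts >> fstab.copy
# eyeball what was appended before doing it for real
tail -1 fstab.copy
```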


Sunday, February 11, 2018

Solution for Box not found error in Vagrant



I was facing this error:

The box 'centos/7' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Atlas, please verify you're logged in via
`vagrant login`. Also, please double-check the name. The expanded
URL and error message are shown below:

URL: ["https://atlas.hashicorp.com/centos/7"]
Error: The requested URL returned error: 404 Not Found



Adding this line in Vagrantfile helped:

Vagrant::DEFAULT_SERVER_URL.replace('https://vagrantcloud.com')
Vagrant.configure("2") do |config|


source: https://github.com/hashicorp/vagrant/issues/9442#issuecomment-363080565

Wednesday, February 7, 2018

multiple commits in a PR




I ended up with multiple commits in my PR after a rebase (and followup push) in my "working" branch.


Solution:

I first copied those few files which I had modified/added into a temp directory (preserving the full path).

Then I reset my branch to origin/master:

git reset --hard origin/master

So, now the "working" branch is in sync with master branch.



Now, copy back all those files from temp directory into the "working" branch.

Carried out a fresh commit with all the changes and verified everything looks fine.


Then, just did a force push as :

git push --force

Now, you can check on GitHub whether only one commit with the required changes is present.

Lesson:
Always verify git status before a push especially after a rebase.

PS: not an optimal solution, but a manual way.