Infinity

A hacker's Thinking

ChatOps: When Communication Matters


Hi All,

I am assuming everyone here has some knowledge about DevOps. Basically:

  • What is DevOps?
  • How do people do DevOps? (not strictly necessary)

A new era of DevOps has started, in which all the DevOps work can be done over chat. This is called ChatOps. Traditionally, DevOps engineers do all their infrastructure-related work in a shell; ChatOps focuses on creating and maintaining infrastructure over chat (IRC, Slack, GitHub, etc.).

Why ChatOps? Really? Because communication matters.

We work in agile teams, which demand a lot of control and flexibility, and ChatOps really provides these things. Suppose you deploy your infrastructure from Slack: everyone (BAs, QAs, devs, etc.) sees the deployment and its status right there in Slack, and everyone gets context on how deployment works for the project.

Even non-technical people who know nothing about DevOps can deploy the services over chat, and everyone comes to know how the deployment went and what happened. In case of failure, everyone will know.

  • Hey, the deployment failed!
  • We know. Tell us something new.

Want some real fun?

It's time to explore ChatOps. I am using these tools for my ChatOps:

  • CoffeeScript
  • Bash
  • Ansible
  • Python

Using all these tools, we can build some really awesome infrastructure over chat. We will go through them step by step.

Part 1: The Hubot

Hubot is a programmable bot that you can customize according to your needs. A basic Hubot template is available from its GitHub repo. It is written in CoffeeScript. You can implement custom listeners for messages in the chat or in DMs with Hubot, and you can integrate Hubot with Slack, IRC, or any other adapter.

Mike: @hubot we love you

Hubot: Same here

Part 2: Using Bash

Hubot listens to your commands, and based on the command given to it you can run bash scripts. So I wrote bash scripts for triggering commands such as checking the health of the load balancers attached to my services.

Part 3: Awesome Ansible

If you are writing your infrastructure as code, Ansible is a good choice for that. The interesting thing I did was to integrate my Ansible scripts with Hubot. Based on the commands given to Hubot, it executes Ansible playbooks, with which you can run deployments and monitor your team's deployment activity.
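Hubot itself is CoffeeScript, but as a rough sketch of the idea in Python (the playbook path, inventory, and variable names below are made up, not my actual scripts), a small helper that a chat command shells out to could look like this:

# Hypothetical glue that a chat command could trigger: run an Ansible playbook
# for a service and report the result back to the channel.
import subprocess


def deploy(service, version):
    """Run the deployment playbook for the given service and version."""
    result = subprocess.run(
        [
            "ansible-playbook",
            "-i", "inventory/qa",                # environment inventory (assumed path)
            "playbooks/deploy.yml",              # deployment playbook (assumed path)
            "-e", "service=%s version=%s" % (service, version),
        ],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        return "Deployed %s %s successfully" % (service, version)
    return "Deployment failed:\n%s" % result.stdout[-500:]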

Part 4: One and only Python

In Slack, one of the things you want is to be notified when a service fails. You can easily write a notification service in Python and integrate it with Slack, so when one of your services stops working, it notifies you that the service is down. You can add more Python scripts according to your needs.
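As a minimal sketch, assuming a Slack incoming webhook and some made-up service endpoints, such a notification service could look like this:

# Minimal notification service: poll health endpoints and post failures to Slack.
# The webhook URL and service list are placeholders, not a real setup.
import time

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
SERVICES = {
    "orders-service": "https://orders.example.com/health",
    "users-service": "https://users.example.com/health",
}


def notify(message):
    """Post a message to the Slack channel behind the incoming webhook."""
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=5)


def watch(interval=60):
    """Check every service periodically and notify Slack when one is down."""
    while True:
        for name, url in SERVICES.items():
            try:
                up = requests.get(url, timeout=5).status_code == 200
            except requests.RequestException:
                up = False
            if not up:
                notify(":warning: %s is not responding" % name)
        time.sleep(interval)


if __name__ == "__main__":
    watch()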

Overall, with ChatOps your deployments become easier, and the whole team's confidence in and knowledge of the deployments of your services and product increases. It brings everyone on board for DevOps.

Convert iBeacon to Eddystone


Beacons are a new way to give users contextual suggestions; they use BLE to transmit information. This post explains how you can use Google's Eddystone protocol with your existing iBeacons without actually converting them into Eddystone beacons.

To get started with this article, you should have a little knowledge of beacons and of the iBeacon and Eddystone protocols.

Here is my logic, in Python, to convert an iBeacon identity into an Eddystone advertised ID:

import base64
from binascii import unhexlify


def advertised_id(self):
    """
    Convert uuid, major, minor into advertised id
    """
    # Eddystone namespace: first 4 bytes + last 6 bytes of the iBeacon UUID
    namespace = '0x' + self.uuid[:8] + self.uuid[-12:]
    major, minor = map(int, (self.major, self.minor))
    temp_instance = self._append_hex(major, minor)
    instance = self._add_padding(temp_instance)
    beacon_id = self._append_hex(int(namespace, 16), instance)
    return base64.b64encode(self.long_to_bytes(beacon_id))

def _add_padding(self, instance):
    """
    Append padding of desired size
    """
    bit_length = (len(hex(instance)) - 2) * 4
    desired_padding_size = self.desired_instance_bits - bit_length
    padding = (2 ** desired_padding_size) - 1
    return self._append_hex(padding, instance)

def _append_hex(self, a, b):
    """
    Append hex number a in front of b
    """
    sizeof_b = 0

    # Count the number of bits in b
    while((b >> sizeof_b) > 0):
        sizeof_b += 1

    # make number of bits perfectly divisible by 4
    sizeof_b += 4 - ((sizeof_b % 4) or 4)

    return (a << sizeof_b) | b

def long_to_bytes(self, value, endianness='big'):
    """
    Convert a long integer into a byte array (big-endian by default)
    """
    width = value.bit_length()
    width += 8 - ((width % 8) or 8)
    fmt = '%%0%dx' % (width // 4)
    s = unhexlify(fmt % value)

    if endianness == 'little':
        s = s[::-1]

    return s
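To make the snippet runnable on its own, here is one hypothetical way to wrap those functions in a small beacon class and call them; the class name, UUID, and major/minor values are made up for illustration:

# Hypothetical wrapper around the functions above (which expect `self`).
class Beacon(object):
    desired_instance_bits = 48  # an Eddystone instance id is 6 bytes (48 bits)

    def __init__(self, uuid, major, minor):
        self.uuid = uuid.replace('-', '')  # strip dashes from the iBeacon UUID
        self.major = major
        self.minor = minor


# Attach the functions defined above as methods of the class
Beacon.advertised_id = advertised_id
Beacon._add_padding = _add_padding
Beacon._append_hex = _append_hex
Beacon.long_to_bytes = long_to_bytes

# Example iBeacon identity (made-up values)
beacon = Beacon('f7826da6-4fa2-4e98-8024-bc5b71e0893e', major=1, minor=2)
print(beacon.advertised_id())  # base64-encoded 16-byte Eddystone-style id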

If you have any doubts, please write in the comments.

Love Branching With Continuous Integration and Continuous Delivery


I am writing this post after looking through a lot of discussions and posts about Continuous Integration and Continuous Delivery. My conclusion is that we can achieve CI and CD using Git feature branches as well.

Suppose your target audience asks for a web application that prints "Hello World" when you land on the web page. We follow an agile methodology, and we will implement this project using CI and CD.

I am a Python lover, so I am going to demonstrate a basic web application written in Flask. As an open-source project, we want to upload this package to PyPI.
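As a rough sketch, the whole application can be as small as this (the file name is illustrative):

# app.py - a minimal Flask application that returns "Hello World"
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return 'Hello World'


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)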

Here are the complete development details:

  • For Development: Flask
  • For CI: Travis
  • For Test Env: Own Host Machine
  • Deploy: PyPI

Suppose I have 3 members in my team and we are working on different features.

Let's come to continuous integration. If you look at the Git workflow, each of us is working on our own feature branch. We use Travis CI, which runs all your tests on every branch you create, on every pull request (PR) opened against the develop branch of the repository, and again on the develop branch when the PR is merged.

I was working on the Feature-01 branch, and now it has been merged. The question now is how we can achieve Continuous Delivery. That is the biggest problem: how can we deploy to our QA environment, given that Travis is a CI tool, not a CD tool?

Now we have 2 options:

  • Use Docker images: after every successful merge, push an image to Docker Hub using Travis and a shell script

  • Use Travis to deploy your build artifacts to Amazon S3

I pick option 2: I will deploy my build artifacts to Amazon S3. In my QA environment, I will write an Ansible script to pull the artifacts and deploy them there. Then I can test my code in the QA environment.
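For illustration only, here is a rough Python equivalent of the two hand-offs described above, using boto3; in practice the upload is done by Travis's S3 deploy step and the download by an Ansible task, and the bucket and artifact names below are made up:

# Push/pull of build artifacts via S3 (illustrative names only)
import boto3

s3 = boto3.client('s3')

# CI side: push the built artifact after a successful merge to develop
s3.upload_file('dist/hello-0.1.0.tar.gz', 'my-build-artifacts',
               'hello/hello-0.1.0.tar.gz')

# QA side: pull the same artifact before installing and deploying it
s3.download_file('my-build-artifacts', 'hello/hello-0.1.0.tar.gz',
                 '/tmp/hello-0.1.0.tar.gz')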

Similarly, I can replicate this for further environments such as staging. This achieves the purpose of Continuous Delivery.

Now we have delivered our project using feature branches and proper testing.

NOTE: I will release my source code on GitHub soon and will provide a link to the repository here.