21. Where can I find example compose files?  

There are many examples of Compose files on GitHub.
Compose documentation
  • Installing Compose
  • Get started with Django
  • Get started with Rails
  • Get started with WordPress
  • Command line reference
  • Compose file reference
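If you just want something to start from, a minimal Compose file for a web app with a Redis backend might look like this (the service names, port, and image choice are illustrative, not from any particular example repository):

```yaml
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - redis
redis:
  image: redis
```

Running `docker-compose up` in the directory containing this file builds the web image and starts both containers, with the `redis` service reachable from `web` via the link.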

22. Are you operationally prepared to manage multiple languages/libraries/repositories?  

Last year, we encountered an organization that developed a modular application while allowing developers to “use what they want” to build individual components. It was a nice concept but a total organizational nightmare — chasing the ideal of modular design without considering the impact of this complexity on their operations.

The organization was then interested in Docker to help facilitate deployments, but we strongly recommended that this organization not use Docker before addressing the root issues. Making it easier to deploy these disparate applications wouldn’t be an antidote to the difficulty of maintaining several different development stacks over the long term.

23. Do you already have a logging, monitoring, or mature deployment solution?  

Chances are that your application already has a framework for shipping logs and backing up data to the right places at the right times. To implement Docker, you not only need to replicate the logging behavior you expect in your virtual machine environment, but you also need to prepare your compliance or governance team for these changes. New tools are entering the Docker space all the time, but many do not match the stability and maturity of existing solutions. Partial updates, rollbacks and other common deployment tasks may need to be reengineered to accommodate a containerized deployment.

If it’s not broken, don’t fix it. If you’ve already invested the engineering time required to build a continuous integration/continuous delivery (CI/CD) pipeline, containerizing legacy apps may not be worth the time investment.

24. Will cloud automation overtake containerization?  

At AWS Re:Invent last month, Amazon chief technology officer Werner Vogels spent a significant portion of his keynote on AWS Lambda, an automation tool that deploys infrastructure based on your code. While Vogels did mention AWS’ container service, his focus on Lambda implies that he believes dealing with zero infrastructure is preferable to configuring and deploying containers for most developers.

Containers are rapidly gaining popularity in the enterprise, and are sure to be an essential part of many professional CI/CD pipelines. But as technology experts and CTOs, it is our responsibility to challenge new methodologies and services and properly weigh the risks of early adoption. I believe Docker can be extremely effective for organizations that understand the consequences of containerization — but only if you ask the right questions.

25. You say that Ansible can take up to 20x longer to provision, but why?  

Docker uses a cache to speed up builds significantly. Every instruction in a Dockerfile is executed in an intermediate container, and its result is stored as a separate layer. Layers are built on top of each other.

Docker scans the Dockerfile and executes each step one after another; before executing a step, it checks whether that layer is already in the cache. On a cache hit, the build step is skipped and, from the user’s perspective, completes almost instantly.

If you structure your Dockerfile so that the things that change most often, such as the application source code, are at the bottom, you will experience near-instant builds.
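As an illustration, here is a Dockerfile ordered for good cache behaviour (a sketch for a Python app; the file names and base image are assumptions):

```dockerfile
FROM python:2.7
# Rarely-changing steps first: these layers stay cached across builds
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Frequently-changing source code last: only the layers below are rebuilt
# when you edit your code
COPY . /app
CMD ["python", "/app/main.py"]
```

Editing application code invalidates only the final `COPY` layer, so the expensive `pip install` step is served from the cache on every rebuild.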

You can learn more about caching in Docker in this article.

Another way to build Docker images amazingly fast is to use a good base image, which you specify in the FROM instruction. You can then make only the necessary changes instead of rebuilding everything from scratch, so the build is quicker. This is especially beneficial on a host without a cache, such as a Continuous Integration server.

Summing up, building Docker images with a Dockerfile is faster than provisioning with Ansible because of the Docker cache and good base images. Moreover, you can eliminate provisioning completely by using ready-to-use, pre-configured images such as postgres.

$ docker run --name some-postgres -d postgres

No installing Postgres at all: it's ready to run.

26. Also, you mention that Docker allows multiple apps to run on one server.  

It depends on your use case, but you should probably split the different components into separate containers; it will give you more flexibility.

Docker is very lightweight and running containers is cheap, especially if you store them in RAM. It's possible to spawn a new container for every HTTP callback, although that's not very practical.

At work I develop using a set of five different types of containers linked together.

In production, some of them are replaced by real machines or even clusters of machines; however, the application-level settings don't change.

Here you can read more about linking containers.

It’s possible because everything communicates over the network. When you specify links in the docker run command, Docker bridges the containers and injects environment variables with information about the IPs and ports of the linked children into the parent container.

This way, in my app settings file, I can read those values from the environment. In Python it would be:

import os

VARIABLE = os.environ.get('VARIABLE')
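Building on that, a slightly fuller sketch: the variable names follow Docker's legacy `--link` naming convention (e.g. `DB_PORT_5432_TCP_ADDR` for `--link db:db`), while the helper function and the `localhost` fallback are my own assumptions for local development outside a container:

```python
import os

def linked_service_address(alias, port, default_host="localhost"):
    # Docker's legacy links inject variables like DB_PORT_5432_TCP_ADDR
    # and DB_PORT_5432_TCP_PORT into the parent container; fall back to
    # local defaults when those variables are absent.
    prefix = "%s_PORT_%d_TCP" % (alias.upper(), port)
    host = os.environ.get(prefix + "_ADDR", default_host)
    resolved_port = int(os.environ.get(prefix + "_PORT", port))
    return host, resolved_port

# Without the link variables set, this resolves to the local fallback:
host, port = linked_service_address("db", 5432)
```

The same settings file then works unchanged in development, where the database is local, and in production, where Docker injects the real address of the linked container.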

There is a tool that greatly simplifies working with Docker containers, linking included. It’s called fig (the predecessor of Docker Compose).

27. What does the deploy process look like for dockerized apps stored in a git repo?  

It depends on what your production environment looks like.

An example deploy process may look like this:
  • Build the app with docker build . in the code directory.
  • Test the image.
  • Push the new image to a registry: docker push myorg/myimage.
  • Notify the remote app servers to pull the image from the registry and run it (you can also do this directly with a configuration management tool).
  • Swap ports in an HTTP proxy.
  • Stop the old container.
You can also consider using Amazon Elastic Beanstalk with Docker, or Dokku.

Elastic Beanstalk is a powerful beast and will do most of the deployment for you, providing features such as autoscaling, rolling updates, zero-downtime deployments, and more.

28. What are the Advantages of Using Docker Containers?  

Docker containers offer various advantages over traditional installers:
  • Using Docker containers enables you to deploy ready-to-run, portable software. Containerized applications are not installed; they simply run within their containers.
  • Using Docker containers eliminates problems such as software conflicts, driver compatibility issues, and library conflicts. The architecture of Docker containers enables you to isolate resources.
  • Docker containers empower microservice architectures, in which monolithic applications are decoupled into minimalist, specialized containers.
  • Docker containers simplify DevOps. Developers can work inside the containers, and operations engineers can work in parallel outside the containers.
  • The use of Docker containers speeds up continuous integration. Traditional installer development teams struggle with rapid build-test-deploy cycles. Docker containers ensure that applications run identically in development, test, and production environments.
  • Docker containers can be run from anywhere: computers, local servers, and private and public clouds.
  • Deploying your products as Docker containers helps you reach new customer segments.

29. Do you think cloud technology development has been heavily influenced by open source development?  

I think open source software is closely tied to cloud computing. Both in terms of the software running in the cloud and the development models that have enabled the cloud. Open source software is cheap, it’s usually low friction both from an efficiency and a licensing perspective.

30. How do you think Docker will change virtualization and cloud environments? Do you think cloud technology has a set trajectory, or is there still room for significant change?  

I think there are a lot of workloads that Docker is ideal for, as I mentioned earlier both in the hyper-scale world of many containers and in the dev-test-build use case. I fully expect a lot of companies and vendors to embrace Docker as an alternative form of virtualization on both bare metal and in the cloud.

As for cloud technology’s trajectory: I think we’ve seen significant change in the last couple of years, and there will be a bunch more before we’re done. There’s the question of OpenStack and whether it will succeed as an IaaS alternative or a DIY cloud solution. I think we’ve only touched on the potential for PaaS, and there’s a lot of room for growth and development in that space. It’ll also be interesting to see how the capabilities of PaaS products develop and whether they grow to embrace or connect with consumer cloud-based products.

31. Can you give us a quick rundown of what we should expect from your Docker presentation at OSCON this year?  

It’s very much a crash-course introduction to Docker, aimed at developers and sysadmins who want to get started with Docker in a very hands-on way. We’ll teach the basics of how to use Docker and how to integrate it into your daily workflow.

32. Why do my services take 10 seconds to recreate or stop?  

Compose stop attempts to stop a container by sending a SIGTERM. It then waits for a default timeout of 10 seconds. After the timeout, a SIGKILL is sent to the container to forcefully kill it. If you are waiting for this timeout, it means that your containers aren’t shutting down when they receive the SIGTERM signal.

There has already been a lot written about this problem of processes handling signals in containers.

To fix this problem, try the following:
  • Make sure you’re using the JSON (exec) form of CMD and ENTRYPOINT in your Dockerfile. For example, use ["program", "arg1", "arg2"], not "program arg1 arg2". The string form causes Docker to run your process via a shell (/bin/sh -c), which doesn’t pass signals along properly. Compose always uses the JSON form, so don’t worry if you override the command or entrypoint in your Compose file.
  • If you are able, modify the application that you’re running to add an explicit signal handler for SIGTERM.
  • Set stop_signal to a signal which the application knows how to handle:

    web:
      build: .
      stop_signal: SIGINT

  • If you can’t modify the application, wrap it in a lightweight init system (like s6) or a signal proxy (like dumb-init or tini). Either of these wrappers takes care of handling SIGTERM properly.
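For the option of adding an explicit SIGTERM handler, here is a minimal sketch in Python; the flag-based clean-shutdown pattern is just one common approach, and the self-sent signal simply simulates what `docker stop` does:

```python
import signal
import os
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Flip a flag so the main loop can exit cleanly, instead of being
    # SIGKILLed after Compose's 10-second timeout.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate `docker stop` delivering SIGTERM to the process:
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)
print("clean shutdown" if shutting_down else "still running")
```

In a real service, the main loop would check the flag on each iteration, finish in-flight work, and exit with status 0, so the container stops immediately rather than waiting out the timeout.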

33. How do I run multiple copies of a Compose file on the same host?  

Compose uses the project name to create unique identifiers for all of a project’s containers and other resources. To run multiple copies of a project, set a custom project name using the -p command-line option or the COMPOSE_PROJECT_NAME environment variable; for example, docker-compose -p myproject up.

34. How exactly are containers (Docker in our case) different from hypervisor virtualization (vSphere)? What are the benefits?  

To run an application in a virtualized environment (e.g. vSphere), we first need to create a VM, install an OS inside it, and only then deploy the application. To run the same application in Docker, all you need to do is deploy it in a container. There is no need for an additional OS layer: you just deploy the application with its dependent libraries, and the rest (kernel, etc.) is provided by the Docker engine. This table from the official Docker website shows it quite clearly.

Another benefit of Docker, from my perspective, is speed of deployment. Let’s imagine a scenario:

ACME inc. needs to virtualize application GOOD APP for testing purposes.

Conditions are:
  • The application should run in an isolated environment.
  • The application should be redeployable at any moment in a very fast manner.

Solution 1

In the vSphere world, what we would usually do is:
  • Deploy an OS in a VM running on vSphere.
  • Deploy the application inside the OS.
  • Create a template.
  • Redeploy the template when needed; redeployment takes around 5-10 minutes.
Sounds great! The app is up and running in an hour, and can then be redeployed in 5 minutes.

Solution 2

  • Deploy Docker.
  • Deploy the app GOODAPP in a container.
  • Redeploy the container with the app when needed.

Benefits: no need to deploy a full OS for each instance of the application, and deploying a container takes seconds.
