Wednesday, April 23, 2014

Guest post on the Puppet Labs blog

Today I was honored to be asked to write a post for the official Puppet Labs blog. That post is here. I wrote about my continuing efforts to use Puppet in a rootless environment.

Anyone looking to do more work in the rootless environment should check out Corey Osman's nonroot puppet repository here. You can see the rootless stdlib UTi is developing here.

And Daenny is putting together a group blog called puppet-a-day. I recommend it for staying current on Puppet best practices, and if you have something cool, I recommend you submit it there!

Monday, April 21, 2014

Ganeti cloud init

Nightfly19 created a Ganeti OS definition based on our work with images last week. It is available now at https://github.com/pdxcat/ganeti_cloud-init.

This is a simple wrapper that lets you use images with Ganeti virtual machines. This is huge because you can use prebuilt images from operating system vendors. Most distributions are crawling over each other trying to provide OpenStack-capable images. Now those images, and that convenience, are available to the Ganeti world.

From the readme:


The advantages to installing virtual machines from cloud images are:
  • Speed: This is much faster than using PXE boot or cdrom boot to load virtual machines.
  • Consistency: These images will be identical to amazon AMIs or machines running on an openstack cluster.
  • Lower effort: The OSUOSL Ganeti-instance-image project can produce a similar effect, but requires the administrator to provide their own dump/image files. These images can be pulled from the internet.

We hope to improve the functionality and installation instructions in the next couple of weeks. But for now, we'd love for some other ganeti users to try it out and let us know if it works in your environment, and what we can do to make it more generally usable.

Thursday, April 17, 2014

Using cloud images with Ganeti


Ganeti is an open source private cloud platform, primarily developed by Google. It is written in Python, with bits of Haskell thrown in for flair.

Virtual machine creation in Ganeti is done through debootstrap or traditional methods such as PXE loading or CD-ROM/ISO interactive installations.

This approach is slower and more error-prone than the image-based VM creation favored by other private cloud software such as OpenStack.

Most major distributions that identify as 'cloud ready' produce regular 'cloud images' to be deployed to public clouds like Amazon or private clouds such as OpenStack.

Out of the box, Ganeti does not support these cloud images. This post describes our admittedly janky way of getting them to work: a method for launching Ganeti VMs quickly from stock cloud images. Future work may grow out of this, in which we develop what Ganeti calls an 'OS install template.' If we do, expect a follow-up post on this blog.

This work has been done in collaboration with nightfly19.


First, we collect the Ubuntu cloud image from the web:

# wget 'http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img'

This file is a disk image containing a disk label, a bootloader, a partition table, and a partition with the root filesystem of the Ubuntu server OS. From the README:


- <prefix>-disk1.img
  This is a qcow2 compressed disk image.  It can be uploaded to OpenStack
  or booted directly via kvm.  You should probably uncompress the image
  (qemu-img convert) before use in non-testing environment.



To prove this, let's inspect it with qemu-img:

# qemu-img info precise-server-cloudimg-amd64-disk1.img
image: precise-server-cloudimg-amd64-disk1.img
file format: qcow2
virtual size: 2.2G (2361393152 bytes)
disk size: 248M
cluster_size: 65536




This file is in the qcow2 format, which stands for QEMU copy-on-write version 2. This is the file format used by KVM with QEMU. Since we're using Ganeti instead of raw KVM, we need to apply this image to the disk of a Ganeti virtual machine. Furthermore, this image is compressed, so we will first uncompress it:


# qemu-img convert -O qcow2 precise-server-cloudimg-amd64-disk1.img precise64.img

This command will take several seconds to run.

# ls
precise64.img  precise-server-cloudimg-amd64-disk1.img

Now we must convert the uncompressed qcow2 image into a raw disk image, the exact byte-for-byte representation of a block device, suitable for dd-ing directly onto the Ganeti instance block device.

# qemu-img convert precise64.img -O raw precise64raw.img

We can then verify that the precise64raw.img file contains valid partition data:


# fdisk -l precise64raw.img

Disk precise64raw.img: 2361 MB, 2361393152 bytes
4 heads, 32 sectors/track, 36032 cylinders, total 4612096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000a23e

           Device Boot      Start         End      Blocks   Id  System
precise64raw.img1   *        2048     4194303     2096128   83  Linux


This precise64raw.img can be reused every time we want to create a new virtual machine.
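Since this raw image gets reused for every new VM, a cheap safeguard (our suggestion, not part of the workflow above) is to checksum it once and verify it before each reuse. A self-contained sketch, using a throwaway demo.img in place of precise64raw.img:

```shell
# demo.img stands in for the reusable precise64raw.img
dd if=/dev/zero of=demo.img bs=1M count=1 2>/dev/null
sha256sum demo.img > demo.img.sha256   # record the checksum once, after conversion
sha256sum -c demo.img.sha256           # verify before dd-ing onto a new instance
```

If the image ever gets truncated or corrupted on disk, the verify step fails before you write a broken filesystem onto an instance.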


The file we'll use to customize each instance of the Ubuntu cloud VM is created by a script called cloud-localds. On newer Ubuntu releases, this script is available from packages. On older releases, download it from here: https://gist.github.com/cpswan/6221258

This configuration is part of an Ubuntu specification called NoCloud. You can research it for more control flags.


# cat > my-user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF


# /usr/local/bin/cloud-localds my-seed.img my-user-data
wrote my-seed.img with filesystem=iso9660 and diskformat=raw
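Beyond setting a password, the NoCloud user-data accepts many other cloud-config directives. A sketch of a slightly richer my-user-data (the hostname, key, and package values here are placeholders, not from our setup):

```yaml
#cloud-config
hostname: myvm
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
ssh_authorized_keys:
  - ssh-rsa AAAA... user@workstation
packages:
  - git
```

Rebuild my-seed.img with cloud-localds after any change to my-user-data; the seed image is only read on first boot.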







With these two images created (my-seed.img is very small), we can create the Ganeti instance. Assuming you have set up networking in your environment, you can use this:

# gnt-instance add --disk-template plain -o image+default --disk 0:size=10g --net 0:mac="cc:aa:77:bb:1c:2d" -H serial_console=true --no-install --no-start solarium.cat.pdx.edu 
Thu Apr 17 01:16:04 2014  - INFO: Selected nodes for instance solarium.cat.pdx.edu via iallocator hail: kvm-milkyway.cat.pdx.edu
Thu Apr 17 01:16:05 2014 * disk 0, size 10.0G
Thu Apr 17 01:16:05 2014 * creating instance disks...
Thu Apr 17 01:16:05 2014 adding instance solarium.cat.pdx.edu to cluster config
Thu Apr 17 01:16:07 2014  - INFO: Waiting for instance solarium.cat.pdx.edu to sync disks
Thu Apr 17 01:16:07 2014  - INFO: Instance solarium.cat.pdx.edu's disks are in sync

# gnt-instance modify --disk add:size=1g solarium.cat.pdx.edu
Thu Apr 17 01:20:07 2014 * disk 1, size 1.0G
Thu Apr 17 01:20:08 2014  - INFO: Waiting for instance solarium.cat.pdx.edu to sync disks
Thu Apr 17 01:20:08 2014  - INFO: Instance solarium.cat.pdx.edu's disks are in sync
Modified instance solarium.cat.pdx.edu
 - disk/1 -> add:size=1024,mode=rw
Please don't forget that most parameters take effect only at the next (re)start of the instance initiated by ganeti; restarting from within the instance will not be enough.









Make sure the instance is down:



# gnt-instance list

solarium.cat.pdx.edu        kvm        image+default     kvm-milkyway.cat.pdx.edu   ADMIN_down      -



Activate its disks:

# gnt-instance activate-disks solarium
kvm-milkyway.cat.pdx.edu:disk/0:/dev/ganeti/78f5c111-f6a0-4446-a590-66ed1d6c2b04.disk0
kvm-milkyway.cat.pdx.edu:disk/1:/dev/ganeti/4a9a53e0-e2b7-49c7-a09f-d0dafda9fb28.disk1

dd the images onto the disks:


# dd if=precise64raw.img of=/dev/ganeti/78f5c111-f6a0-4446-a590-66ed1d6c2b04.disk0
4612096+0 records in
4612096+0 records out
2361393152 bytes (2.4 GB) copied, 68.8486 s, 34.3 MB/s

# dd if=my-seed.img of=/dev/ganeti/4a9a53e0-e2b7-49c7-a09f-d0dafda9fb28.disk1
732+0 records in
732+0 records out
374784 bytes (375 kB) copied, 0.0289169 s, 13.0 MB/s

Now deactivate the disks and start the instance:

# gnt-instance start solarium
Waiting for job 4670 for solarium.cat.pdx.edu ...

This machine by default will write all its output to the serial console. Watch this with:

# gnt-instance console solarium

<snip>

This will show a combination of standard dmesg output from a system booting and the output of the cloud-init/NoCloud scripts. Among the lines echoed are the IP address and routing table of the instance:

mountall: Event failed
ci-info: lo    : 1 127.0.0.1       255.0.0.0       .
ci-info: eth0  : 1 131.252.208.5   255.255.255.0   cc:aa:77:bb:1c:2d
ci-info: route-0: 0.0.0.0         131.252.208.1   0.0.0.0         eth0   UG
ci-info: route-1: 131.252.208.0   0.0.0.0         255.255.255.0   eth0   U
cloud-init start running: Thu, 17 Apr 2014 08:38:07 +0000. up 4.66 seconds

Then you will see the SSH host key fingerprints as well as the public keys:


ec2: #############################################################
ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
ec2: 1024 9e:55:82:94:82:db:59:c1:60:1f:94:38:3b:36:af:83  root@ubuntu (DSA)
ec2: 256 22:b5:0c:45:83:5e:ee:a4:fb:4f:96:c7:01:88:26:7a  root@ubuntu (ECDSA)
ec2: 2048 cf:0f:ea:cf:ff:89:ed:7d:f5:cb:3e:48:e1:9e:2d:72  root@ubuntu (RSA)
ec2: -----END SSH HOST KEY FINGERPRINTS-----
ec2: #############################################################
-----BEGIN SSH HOST KEY KEYS-----
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAQAEcZ08DnGNr6HF8UcRsOge98sYVOm37sSZyj+NplKtBl+oL6lqCmwqgNwrz1F497ym2XVT7IEpXJ5NyuMqsk= root@ubuntu
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCvIu5vC8FaGe4Hk+A280VGi07CsXFiDXpuA5q2ajTnMKa0Wy8PSWArZthSZrYpeoHYFL0ac+8mxfQMv8wafnxSiNkaYLOQ1GZr0lZr08oGry9FzD4w+kzqRudmE29aivffmNj4Qe
J6P86nQ8j/0M0b7nQhn7SLKJENQzuucsygVaPptRIu+Ntuap+F2sALPVN+oDpVt/IOLWnGg7ht5dG669OadLFbVebVhITNVah4CKSHmZJ37LqA64ArTpCw8bT60074yjlhDxLttUurVOFitnPaJfNzH8wBQzdN5GeU1Yy7iTNs6wht
vLn8pW+xt4rCHhlbGoEsx5sUzbscrx4p root@ubuntu
-----END SSH HOST KEY KEYS-----


The machine should DHCP itself and be ready to accept password authentication momentarily.



# ssh ubuntu@solarium.cat.pdx.edu
The authenticity of host 'solarium.cat.pdx.edu (131.252.208.5)' can't be established.
ECDSA key fingerprint is 22:b5:0c:45:83:5e:ee:a4:fb:4f:96:c7:01:88:26:7a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'solarium.cat.pdx.edu,131.252.208.5' (ECDSA) to the list of known hosts.
ubuntu@solarium.cat.pdx.edu's password:
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.2.0-60-virtual x86_64)
...





This was a pretty rough cut of the process. We hope to streamline it and build software that smooths it out. One obvious way to make things faster is placing the precise64raw.img file in /dev/shm so that we read from memory and write to disk, instead of reading and writing from the same disk.
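The /dev/shm idea can be sketched as follows. This demonstration uses a throwaway demo.img instead of the real precise64raw.img, and copies to a local file rather than a /dev/ganeti device, so it is runnable anywhere with a tmpfs mounted at /dev/shm:

```shell
dd if=/dev/zero of=demo.img bs=1M count=4 2>/dev/null   # stand-in for precise64raw.img
cp demo.img /dev/shm/demo.img                           # stage the image in RAM
dd if=/dev/shm/demo.img of=copy.img bs=1M 2>/dev/null   # this read is served from memory
cmp -s demo.img copy.img && echo "copies match"
rm -f /dev/shm/demo.img                                 # free the RAM afterwards
```

In the real workflow, the `of=` target would be the activated /dev/ganeti disk path, and staging the 2.4 GB raw image in RAM trades memory for avoiding a second set of disk reads during each dd.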


References:

http://cloud-images.ubuntu.com/daily/server/precise/current/
https://help.ubuntu.com/community/UEC/Images
http://docs.ganeti.org/ganeti/master/html/design-ovf-support.html
http://docs.ganeti.org/ganeti/master/html/ovfconverter.html
http://serverfault.com/questions/466269/ganeti-installing-centos-rhel-instance
https://groups.google.com/forum/?fromgroups=#!topic/ganeti/YH_5pM8rAvc
http://openstack.redhat.com/Image_resources
https://nsrc.org/workshops/2014/sanog23-virtualization/raw-attachment/wiki/Agenda/ex-ganeti-create-vm.htm
http://www.slideshare.net/ramereth/ganeti-handson-walkthru-part-2-linuxcon-2012
https://gist.github.com/cpswan/6221258





Thursday, April 3, 2014

RHEL 7 and Installing EPEL

Red Hat Enterprise Linux 7 is nearly here; it is out in beta right now. This means that I've installed it for reasons that are clear to no one. It also means we'll get a number of posts related to making it do the most basic things.





Notably, installing EPEL on RHEL 7 right now is not obvious. Typically there is an .rpm file that will install the repository for you. That is true in this case, but the link to it is hard to find.

Additionally, not all packages from EPEL 6 are in EPEL 7. If you find a package you need, reach out to the maintainer to get it migrated. The process is relatively painless.



To install EPEL on Red Hat Enterprise Linux 7 Beta:


wget 'http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.1.noarch.rpm'


nibz@computer$ wget 'http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.1.noarch.rpm'
--2014-04-03 14:30:34-- http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.1.noarch.rpm
Resolving dl.fedoraproject.org (dl.fedoraproject.org)... 209.132.181.27, 209.132.181.23, 209.132.181.24, ...
Connecting to dl.fedoraproject.org (dl.fedoraproject.org)|209.132.181.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13200 (13K) [application/x-rpm]
Saving to: ‘epel-release-7-0.1.noarch.rpm’

100%[======================================>] 13,200 84.0KB/s in 0.2s 

2014-04-03 14:30:35 (84.0 KB/s) - ‘epel-release-7-0.1.noarch.rpm’ saved [13200/13200]

Then install the rpm file (for example with sudo rpm -ivh epel-release-7-0.1.noarch.rpm) and run yum to confirm the new repository is picked up:

[skrum@pdxudev98 Downloads]$ yum update
Loaded plugins: langpacks, product-id, subscription-manager
You need to be root to perform this command.
[skrum@pdxudev98 Downloads]$ sudo !!
sudo yum update
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
epel/x86_64/metalink                                     |  14 kB     00:00    
epel                                                     | 3.7 kB     00:00    
(1/2): epel/x86_64/group_gz                                | 243 kB   00:00    
(2/2): epel/x86_64/primary_db                              | 1.7 MB   00:04    
No packages marked for update

Wednesday, April 2, 2014

Pro Puppet 2nd Edition

I wrote a book! Well, me and some people wrote a book. The original is by James Turnbull and Jeff McCune. The update is by myself, William Van Hevelingen, and Ben Kero.




Pro Puppet 2nd Edition is an update to the original Pro Puppet, newly updated for Puppet 3.

We cover a number of topics new since the original came out:


  • Hiera (entire chapter devoted to this topic)
  • Puppet DB
  • Puppet Enterprise
  • PuppetBoard
  • r10k
  • rspec-puppet
  • rspec-system (now called Beaker)
And we have updated material on other topics such as:

  • Puppet language
  • Puppet syntax
  • Puppet modules
  • Best practices
  • Scaling Puppet
  • Exported resources
  • Marionette Collective (MCollective)
If you have any feedback on this title please drop me a line.

The source code for the examples can be found here:

https://github.com/pro-puppet/source-code

And the errata are being collected at:


https://github.com/pro-puppet/pro-puppet-errata