Monday, December 15, 2014

Puppet cert inspector

Today while poking around in the puppet source code, I came across a utility in the ext/ directory called cert_inspector. This seems to be a little utility that opens up certificates and interrogates them for useful data. This is better than what I usually do, which is incanting openssl directly. It is also capable of chewing up an entire /var/lib/puppet/ssl directory and dumping information on every cert and key it finds.
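For comparison, the openssl incantation to pull the same fields out of a single cert looks something like this (the same flags I always end up fumbling for):

$: openssl x509 -in ~/.puppet/ssl/certs/ca.pem -noout -subject -issuer -serial

See the cert_inspector output below: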

 (master u=)$: ./ext/cert_inspector ~/.puppet/ssl/certs/ca.pem
/home/nibz/.puppet/ssl/certs/ca.pem:
  Certificate assigning name /CN=Puppet CA: zabava.cat.pdx.edu to key</CN=Puppet CA: zabava.cat.pdx.edu>
    serial number 1
    issued by /CN=Puppet CA: zabava.cat.pdx.edu
    signed by key</CN=Puppet CA: zabava.cat.pdx.edu>

 (master u=)$: ./ext/cert_inspector ~/.puppet/ssl/
WARNING: file "/home/nibz/.puppet/ssl/public_keys/hunner_what_r_u_doin.pem" could not be interpreted
WARNING: file "/home/nibz/.puppet/ssl/public_keys/hunner_stahp.pem" could not be interpreted
WARNING: file "/home/nibz/.puppet/ssl/public_keys/maxwell.hsd1.or.comcast.net.pem" could not be interpreted
WARNING: file "/home/nibz/.puppet/ssl/public_keys/hunner.pem" could not be interpreted
/home/nibz/.puppet/ssl/certs/ca.pem:
  Certificate assigning name /CN=Puppet CA: zabava.cat.pdx.edu to key</CN=Puppet CA: zabava.cat.pdx.edu>
    serial number 1
    issued by /CN=Puppet CA: zabava.cat.pdx.edu
    signed by key</CN=Puppet CA: zabava.cat.pdx.edu>

/home/nibz/.puppet/ssl/certificate_requests/hunner.pem:
  Certificate request for /CN=hunner having key key</CN=hunner>
    signed by key</CN=hunner>

/home/nibz/.puppet/ssl/certificate_requests/hunner_stahp.pem:
  Certificate request for /CN=hunner_stahp having key key</CN=hunner_stahp>
    signed by key</CN=hunner_stahp>

/home/nibz/.puppet/ssl/certificate_requests/hunner_what_r_u_doin.pem:
  Certificate request for /CN=hunner_what_r_u_doin having key key</CN=hunner_what_r_u_doin>
    signed by key</CN=hunner_what_r_u_doin>

/home/nibz/.puppet/ssl/private_keys/hunner.pem:
  Private key for key</CN=hunner>

/home/nibz/.puppet/ssl/private_keys/hunner_stahp.pem:
  Private key for key</CN=hunner_stahp>

/home/nibz/.puppet/ssl/private_keys/hunner_what_r_u_doin.pem:
  Private key for key</CN=hunner_what_r_u_doin>

/home/nibz/.puppet/ssl/private_keys/maxwell.hsd1.or.comcast.net.pem:
  Private key for key</home/nibz/.puppet/ssl/private_keys/maxwell.hsd1.or.comcast.net.pem>

Tuesday, December 9, 2014

Testing Puppet node definitions

Sometimes Puppet node definitions get a little hairy. Here is a quick trick I use to validate them manually. This is inspired by this review.

Given a regex node definition, create a test file called nodedef.pp:

node /^git(-frontend\d+)?\.openstack\.org$/ {
  notify { 'match': }
}

Then, using the --certname option to puppet apply, do some quick spot testing to see what matches.

$: puppet apply nodedef.pp 
Error: Could not find default node or by name with 'maxwell.pdx.edu, maxwell.pdx, maxwell' on node maxwell.pdx.edu
Error: Could not find default node or by name with 'maxwell.pdx.edu, maxwell.pdx, maxwell' on node maxwell.pdx.edu
$: puppet apply --certname='git.openstack.org' nodedef.pp  
Notice: Compiled catalog for git.openstack.org in environment production in 0.02 seconds
Notice: match
Notice: /Stage[main]/Main/Node[git-frontendd.openstack.org]/Notify[match]/message: defined 'message' as 'match'
Notice: Finished catalog run in 0.03 seconds
$: puppet apply --certname='git48.openstack.org' nodedef.pp  
Error: Could not find default node or by name with 'git48.openstack.org, git48.openstack, git48, maxwell.pdx.edu, maxwell.pdx, maxwell' on node git48.openstack.org
Error: Could not find default node or by name with 'git48.openstack.org, git48.openstack, git48, maxwell.pdx.edu, maxwell.pdx, maxwell' on node git48.openstack.org
$: puppet apply --certname='git-frontend01.openstack.org' nodedef.pp  
Notice: Compiled catalog for git-frontend01.openstack.org in environment production in 0.02 seconds
Notice: match
Notice: /Stage[main]/Main/Node[git-frontendd.openstack.org]/Notify[match]/message: defined 'message' as 'match'
Notice: Finished catalog run in 0.03 seconds 
This gives us the confidence to push this node definition to production without worrying about affecting existing git servers.

Monday, December 8, 2014

#puppethack

#puppethack is the new version of the Puppet triage-a-thon. It is a decentralized hackathon for open source Puppet projects. This year I participated mostly by contributing to the puppetlabs-rabbitmq module. I worked closely with Colleen Murphy of Puppet Labs on this.

When we started there were 31 outstanding pull requests. Now there are only 21, and five of those were opened during or after the hackathon.


I am most proud of my beaker testing PR which added beaker (acceptance and integration) testing to the rabbitmq_user, rabbitmq_vhost, and rabbitmq_policy types.

Overall #puppethack was a success and I am glad I participated. I want to thank my employer, HP, for allowing me to participate in the open source ecosystem. I am looking forward to doing it next year!

Sunday, December 7, 2014

Puppet Functions in stdlib

You should read up on the Puppet Functions in puppetlabs/stdlib. Seriously.

If you consider yourself a serious Puppet user, i.e. you use it more than twice a month, you owe it to yourself to read through them. The README has a brief description of the functions that are available. Every time I read through it, I find more useful functions have been added. And with stronger protections for function composability, there is no reason not to use functions all the time, every time.

Even if all you do is learn about the existence of the validation functions, you will be able to make your code more robust and easier on users in just a line or two.
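For example, guarding class parameters looks something like this (the class and parameter names here are made up):

class myapp (
  $port        = '8080',
  $manage_user = true
) {
  # fail at compile time with a readable message instead of
  # exploding somewhere deep in the catalog
  validate_string($port)
  validate_bool($manage_user)
}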

For extra credit check out the puppet-community extlib module which has more functions not deemed cool enough for puppet core.

To take my own advice, I'll now post some functions whose existence I did not know about:

  • chomp
  • chop
  • defined_with_params
  • difference
  • delete_undef_values
  • empty (OMG USEFUL)
  • get_param (this changes eveeeeerything)
  • private
  • reject (duuuuuudeee)
  • squeeze
Happy hacking, fellow Puppeteers!

Saturday, November 29, 2014

Bashrc: Gerrit

Gerrit is the code review and git hosting tool used by OpenStack. It is common courtesy to mark a change 'work in progress' when you have submitted it but it is not ready for others to review. Others will see the work in progress bit is set and not waste time reviewing patches that are not ready yet.

The 'git review' tool is good for submitting changes, but then I have to go to the web ui to mark a change as wip. I can use gertty for this, but again that means going and searching it out.

I have added the following function to my .bashrc:

gerrit () {
    if [ "$1" = "wip" ]; then
        # find the hash of the most recent local commit
        commit=$(git show | grep -m1 commit | cut -d " " -f 2 2>/dev/null)
        if [ -z "$commit" ]; then
            echo "Not in git directory?"
            return 1
        fi
        gerrit review "$commit" --workflow -1
        return $?
    fi
    username=$(git config gitreview.username)

    ssh -o VisualHostKey=no -p 29418 "$username"@review.openstack.org gerrit "$@"
}

This function enables some pretty cool features. It passes its arguments straight through to the gerrit ssh command-line interface, meaning you can do things like this:


$: gerrit ls-projects | grep puppet
openstack-infra/puppet-apparmor
openstack-infra/puppet-dashboard
openstack-infra/puppet-github
openstack-infra/puppet-httpd
openstack-infra/puppet-jenkins
openstack-infra/puppet-kibana
openstack-infra/puppet-pip
openstack-infra/puppet-storyboard
openstack-infra/puppet-vcsrepo
openstack-infra/puppet-vinz
openstack-infra/puppet-yum
openstack-infra/puppet-zuul
openstack/tripleo-puppet-elements
stackforge/puppet-ceilometer
stackforge/puppet-ceph
stackforge/puppet-cinder
stackforge/puppet-designate
stackforge/puppet-glance
stackforge/puppet-heat
stackforge/puppet-horizon
stackforge/puppet-ironic
stackforge/puppet-keystone
stackforge/puppet-manila
stackforge/puppet-monasca
stackforge/puppet-n1k-vsm
stackforge/puppet-neutron
stackforge/puppet-nova
stackforge/puppet-openstack
stackforge/puppet-openstack-cloud
stackforge/puppet-openstack-specs
stackforge/puppet-openstack_dev_env
stackforge/puppet-openstack_extras
stackforge/puppet-openstacklib
stackforge/puppet-sahara
stackforge/puppet-swift
stackforge/puppet-tempest
stackforge/puppet-trove
stackforge/puppet-tuskar
stackforge/puppet-vswitch
stackforge/puppet_openstack_builder
$: gerrit -h
gerrit [COMMAND] [ARG ...] [--] [--help (-h)]

 --          : end of options
 --help (-h) : display this help text

Available commands of gerrit are:

   ban-commit           Ban a commit from a project's repository
   create-account       Create a new batch/role account
   create-group         Create a new account group
   create-project       Create a new project and associated Git repository
   flush-caches         Flush some/all server caches from memory
   gc                   Run Git garbage collection
   gsql                 Administrative interface to active database
   ls-groups            List groups visible to the caller
   ls-members           List the members of a given group
   ls-projects          List projects visible to the caller
   ls-user-refs         List refs visible to a specific user
   plugin              
   query                Query the change database
   receive-pack         Standard Git server side command for client side git push
   rename-group         Rename an account group
   review               Verify, approve and/or submit one or more patch sets
   set-account          Change an account's settings
   set-members          Modify members of specific group or number of groups
   set-project          Change a project's settings
   set-project-parent   Change the project permissions are inherited from
   set-reviewers        Add or remove reviewers on a change
   show-caches          Display current cache statistics
   show-connections     Display active client SSH connections
   show-queue           Display the background work queues
   stream-events        Monitor events occurring in real time
   test-submit         
   version              Display gerrit version

See 'gerrit COMMAND --help' for more information.


We also inspect the first argument to see if it is 'wip.' This allows us to add new commands to the gerrit cli without changing any code or having access to the gerrit server. What I've added is the 'wip' command, which inspects the local git repository for the latest change and marks it as wip in gerrit. This changes my workflow to look like this:

$: git review
$: gerrit wip

This is much shorter, more unixy, and doesn't require me to hop out of the terminal. Future improvements would be to identify if you are in a stack of changes and wip all of them.
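A rough sketch of what that could look like, untested, and assuming the stack is everything between origin/master and HEAD:

gerrit_wip_stack () {
    # mark every commit in the local stack as work-in-progress
    for commit in $(git rev-list origin/master..HEAD); do
        gerrit review "$commit" --workflow -1
    done
}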

Saturday, November 22, 2014

Leaving Blogger

It's time to join the future and host my own blog. I'm also going to do the regular stuff of evaluating technologies and picking one, re-developing tools that other people have already developed, etc.

I've debated doing this many times. I've always felt that I didn't want to be that person who only blogs about blogging. I feel after a couple years of pretty consistent blogging, that I won't totally ignore my blog after putting a lot of effort into it. Plus this is an excuse to build a website, something I am embarrassingly weak in.

I'm pretty sure the next location of my blog will be http://spencerkrum.com but I'm not entirely sure.

Right now the plan is to move to pelican because python and restructured text are both technologies that have crossover with OpenStack, and I like that. What I may end up writing is tooling to pull my old posts out of Blogger.

I have no idea if blogger will let me put a redirect in for my subdomain on their domain. I have to think that literally no one at google works on this now, right? After reader went away I was sure this would get the axe, and yet it remains...

Wednesday, November 12, 2014

Guest Post on Puppet-a-day

Today I was honored to publish a guest post on the puppet-a-day community blog. You can find my post here.


Big thanks to @daenney for making the puppet-a-day thing go.

Wednesday, November 5, 2014

Future posts/projects

Here is a list of projects I want to do or see get done, and posts I'd like to write. For lack of a better place, I'll simply post the names here and we'll see where it goes.


  • Puppet-kick replacement
    • Puppet kick really is super dead
    • need new daemon/maybe new command line utility
    • some kind of daemon that listens for http kicks and fires puppet
    • could re-use the kick api, or bold new territory
      • /api/status
        • can return:
          • 'puppet running'
          • 'puppet not running'
          • 'puppet last run was: <bool: success> <bool: changes>'
          • 'puppet admin disabled'
      • /api/run
        • async
        • can fire a run
        • can fire a run against an environment
        • maybe noop?
    • Fuck no, I'm not dealing with auth.conf. Maybe just a list of dns or fingerprints that are allowed to fire a kick?
  • PuppetBoard-like web applications
    • CA signing/revoking web application
    • DB-backed, ENC
  • Barbican integration
    • Use openstack-barbican
    • Secret storage as a backend for hiera
    • Certificate Authority api, possibly become ref implementation of external CA interaction for puppet
  • Ceph Continuation
    • continue exploration of ceph tool
  • AFS exploration
    • continue exploration of AFS/Kerb
  • Terraform w/ OpenStack
    • It is so freaking close, and yet
  • Hodor
    • hodor is a dumb script I wrote around nova to get work done 
  • GPG mapping
    • games i've played with using javascript to visualize the gpg web of trust

These will be done (or not done) in any random order. If you want to see one get done, give me some feedback.

Monday, October 20, 2014

Cool diagram tool

So tehmaze wrote a sweet command line graphing utility.

Jason Dixon retweeted this on twitter:

[embedded tweet]

Which led me to do this:

[embedded graph screenshot]

Which is a modified version of what Jason Dixon taught me in this video from pdx devops days:

[embedded video]
Diagram: https://github.com/tehmaze/diagram
My line: curl -s 'http://graphite.openstack.org/render/?target=sum(stats_counts.gerrit.event.*)&format=json&from=-24hrs' | json_pp | grep ',' | grep -v \] | tr -d "," | diagram

Thursday, September 18, 2014

Puppet Conf 2014!

Puppet Conf is next week. I'll be attending with Krinkle and Blkperl. We will have a table and books to sign at the 'Meet the Authors' event on the first evening.

Hope to see you all there!

Friday, August 1, 2014

Puppet Manifests and operating system updates

This is a post about my opinion on how we should be using the params.pp pattern. It originated from review here.

This is what the code used to look like:

$ruby_bundler_package = 'ruby-bundler'


It worked great on precise. It still does. However, when trusty came out, they changed the name of the package to bundler. This broke the above puppet code.

The fix was simple:


# will install bundler for all Debian distros
# or for Ubuntu trusty
if ($::operatingsystem == 'Debian') or ($::lsbdistcodename == 'trusty') {
  $ruby_bundler_package = 'bundler'
} else {
  $ruby_bundler_package = 'ruby-bundler'
}


This is made a little more complicated because it handles Debian in addition to Ubuntu nodes.

However, there is a better way:

# will install ruby-bundler for Ubuntu Precise
# and bundler for Debian or newer Ubuntu distros
if ($::lsbdistcodename == 'precise') {
  $ruby_bundler_package = 'ruby-bundler'
} else {
  $ruby_bundler_package = 'bundler'
}



Instead of adding those names to an ever expanding case statement, this special cases the precise machines. In addition to being shorter, this better future-proofs the code. Inevitably this will be run on Utopic or later Ubuntu versions, and using the trusty package name by default will automatically work on these newer versions.

Now, generally it is best practice in case statements to fail on default; using an else statement like this is a violation of the spirit of that rule. But if statements like this are common in params.pp and will save you time in the future.
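For reference, the fail-on-default pattern that rule refers to would look something like this with the same package names:

case $::lsbdistcodename {
  'precise': { $ruby_bundler_package = 'ruby-bundler' }
  'trusty':  { $ruby_bundler_package = 'bundler' }
  default:   { fail("Unsupported distro: ${::lsbdistcodename}") }
}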

You can ask yourself as a follow up, "Where else did I special-case the new version of the operating system, instead of special-casing the old version?"

Tuesday, July 15, 2014

OSCON Talk: "Pro Puppet"

I'm giving a talk at OSCON 2014! It's called Pro Puppet and will cover the techniques I think anyone can use to get the most out of using Puppet. To celebrate and promote, we've created a word cloud of all the words in Pro Puppet 2nd Edition. Please come watch me at 4:10 pm on Wednesday!

Saturday, July 12, 2014

CephFS as a replacement for NFS: Part 1

This is the first in a series of posts about CephFS. The overall goal is to evaluate and characterize the behavior of CephFS and determine if it can be a reliable replacement for NFS.

The current use case of NFS is 400G-1T 'stashes' shared from an NFS server to hundreds of Linux/Unix clients in an academic setting. In some cases these stashes are accessed by a single user on a single machine, in some cases dozens of users access them across dozens of machines.

Drawbacks to the current situation are the same as any situation involving NFS:

  • Security is a joke
  • Single über-powerful NFS filers present a SPOF
  • Bigger and bigger filers get more and more expensive
  • Forced to use proprietary and expensive ZFS on Solaris
  • Backing up is becoming a problem as total dataset size becomes more than a tape backup system can really hold
  • No tiering of storage. The whole dataset either goes on the fast disks or the slow disks
There are also some advantages of this system:

  • NFS is old faithful
  • Every operating system supports it, and usually pretty well
  • NFS ipv6's like a champ
  • It's already working
  • Integrates well with pam, autofs, ldap
  • Vendor, while expensive, is really good at fixing it
  • ZFS allows 'thin provisioning' so that we can oversubscribe.
  • ZFS allows full nfsv4 acls to be used (This could also go in the drawbacks section because extended acls cause much pain)

Some key advantages we hope to achieve with ceph:

  • Clustering
  • Replication of data at the ceph layer instead of RAID
  • Authentication
  • Tiering of disks/storage
  • Setting different replication levels for different storage sets

The CephFS remote filesystem has capabilities roughly analogous to NFS: there is a single 'volume', it can be simultaneously mounted by multiple clients, and it respects unix groups.
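To make 'roughly analogous' concrete, mounting CephFS with the kernel client looks a lot like mounting NFS (hostnames and paths here are made up):

# NFS today
mount -t nfs filer.example.com:/stash /mnt/stash

# CephFS, once a cluster exists
mount -t ceph mon1.example.com:6789:/ /mnt/stash -o name=admin,secretfile=/etc/ceph/admin.secret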

In the follow up posts to this one we will build out a test ceph cluster, build filesystems on it, mount them, and generally attempt to build feature parity with an NFS system.

Wednesday, July 9, 2014

Hipchat + Bitlbee + Irssi

I use irssi for open source. I use hipchat for work. It's pretty obvious that they should be combined.

There are a number of posts around the internet covering how to do this but I will throw my hat into the ring. Suffice to say, I'm standing on the shoulders of giants.

I use Centos 6 as my jump host. After a lot of head scratching I eventually discovered that the version of bitlbee in centos/epel is too old to work correctly. I thus had to compile my own. Yippie. Why aren't we just using FreeBSD again?

 ./configure --prefix=$HOME/local --ssl=openssl

Architecture: Linux

Configuration done:
  Debugging disabled.
  Building PIE executable
  Binary stripping enabled.
  Off-the-Record (OTR) Messaging disabled.
  systemd disabled.
  Using event handler: glib
  Using SSL library: openssl
  Building with these protocols: msn jabber oscar yahoo twitter

After make and make install, the software is installed in my home directory. Bitlbee installs to $PREFIX/sbin, so be sure that is in your $PATH.
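For example, with the prefix used above, a line like this in your .bashrc does it:

export PATH="$HOME/local/sbin:$PATH"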

We can now fire up bitlbee:


[nibz@destiny]~% mkdir ~/.config/bitlbee
[nibz@destiny]~% bitlbee  -D -i 127.0.0.1 -p 2121 -d /home/nibz/.config/bitlbee
Warning: Unable to read configuration file `/home/nibz/local/etc/bitlbee/bitlbee.conf'.


Despite this troubling warning, bitlbee is running correctly:

[nibz@destiny]~% ps -ef | grep bitlbee                                        
nibz     31276     1  0 00:07 ?        00:00:00 bitlbee -D -i 127.0.0.1 -p 2121 -d /home/nibz/.config/bitlbee
nibz     31386  9444  0 00:07 pts/20   00:00:00 grep bitlbee

With this up we have bitlbee listening on localhost port 2121. Connect to this with your favorite irc client; I use irssi.
  
/server add -auto -network hipchat 127.0.0.1 2121
/connect 127.0.0.1
Now you must join the bitlbee control channel and proceed to do the quickstart tutorial stuff. Choose a very secure password because anyone able to connect to the bitlbee socket can attempt to log into your account. As far as I know there is no rate limiting.


/join bitlbee
help quickstart
help quickstart2

Next we follow the instructions here, here, here and here. Don't worry, I have my settings below. You'll need more than just your hipchat credentials. You need to get your hipchat metadata from here. I'm not sure what the security status of that information is, but I've gone ahead and anonymized mine.


On the jabber xmpp settings page of your hipchat account, there is a 'username', a 'jabber id', and a 'conference (muc) domain'.

My username is of the format 8489_938318@chat.hipchat.com.

The first number, before the underscore (8489), is your organization id. The second number is essentially your user number.

Users on your domain will look like <Bob Dole> 8489_38239@chat.hipchat.com.

Channels/Rooms will look like #ScienceLab 8489_ScienceLab@conf.hipchat.com.

Notice that the channels are suffixed by 'conf.hipchat.com' and the users are suffixed by 'chat.hipchat.com'. I believe this corresponds to the 'conference (muc) domain.'

Anyhoo this is how you connect:

account add jabber 8489_384193@chat.hipchat.com
account jabber set tag hipchat
account hipchat set nick_source full_name
account hipchat set resource bot

Now you have to feed it your password. Use the /oper command on your irc client, as if you were assuming irc network operator privileges. This will prompt you for your password, and your password will not be echoed to you. I do not know to what extent the password is encrypted on disk.

Now connect to the network:


account hipchat on

You'll see a long list of 'people' join. I think this is everyone in your domain. I think this is also the place where you private message people. As in, any messages coming to you in this domain channel are private messages from the person sending it and any messages you send are messages you send privately.

Now join your channel and set your nick to present as your real name to appease the gods.

chat add hipchat 8489_ScienceLab@conf.hipchat.com #ScienceLab
channel #ScienceLab set nick 'James Kirk'
/join #ScienceLab

And just like that. Boom. Hipchat through Irssi.

Monday, July 7, 2014

Giving a great talk

These are a few of my thoughts on how to make any talk you're giving great.


  • Practice! (something like 9/10 people don't do their presentation ONCE before giving it)
  • Practice doesn't just mean run through the whole thing. It means trying little paragraph-sized parts over and over again until you get the timings right. This can be done in the shower, while running, on the commute, whenever! These short, punchy parts of your talk will have the biggest impact on your audience. 
  • The title of your talk should be short and to the point. "Intermediate Graphite" is good. "Automating metrics with graphite and collectd: How I learned to stop worrying and love the data" is bad. For one, every talk these days seems to be titled off of a movie. For two, your talk title will be printed on a tightly-jammed conference schedule, compressed into 30 pixels on a mobile app, and the very top of your title slide, making it longer makes it harder to tell what's going on. For three, shorter phrases just pack more punch. You don't have to tell everyone what will be covered in your talk in the title, you can let the title be mysterious. This will both bring in more audience members who want to see what is going on, and increase your chances of being accepted to talk because conference organizers will be curious as well. 
  • Carry in your own water: a bottle, a pitcher, several cups, whatever. Just make sure you are well prepped for hydration. A rule of thumb is one 8 oz cup per half hour of talking. This water should be room temperature. Don't drink cold water before or during your talk. Cold water freezes the vocal cords. 
  • Take water sips in between paragraphs or sections. Stopping mid point or sentence to drink pulls your audience out of the narrative. 
  • Print your speaker notes. I'm serious. Kill a tree. You do not want to be at the mercy of your laptop, conference wifi, or power problems. 
  • Most projector systems are driven off VGA. Make sure your laptop has VGA out or you have the appropriate dongle. Don't loan this dongle to anyone. Guard it with your life. 
  • Give your laptop a full charge and resist using it until showtime for your talk. 
  • Have a local copy of your talk on your laptop, if using google-docs, download a pdf and present from that. 
  • Email yourself a copy of the talk. If your laptop totally sparks up on the day of, you can use your phone to quickly forward the talk to a buddy who can set it up on their laptop. 
  • Use the restroom before your talk. Wash your hands, splash some water on your face. Get pumped. Men: sit down for this bathroom break, this is no time to tempt fate. 
  • Bring a friend to the talk. If you're alone at a conference, make a friend and have them come to the talk. Install this person in the front row nearest to you. 
    • If this person is paying rapt attention, the people behind them will instinctively follow suit. The same is true if that person is playing with their phone. 
    • This person can ask a question at the end of the talk, even if no one else does. 
    • If you have projector trouble, after 3 minutes of working on it, you can begin your talk from your printed notes and your buddy can fight with the projector. 
  • Think about the best teacher you had in high school or college. What did they do in their lectures to engage you? 
  • Slides suck. But you have to use slides. Most audience members expect slides and if you literally don't have any, the audience will be confused, and it may become unclear when the talk is starting and stopping. 
  • I am a big fan of split talks. Start the presentation with 5-10 slides. These slides introduce you and your content, define terms, provide links and generally set up the talk. Then you can go to the whiteboard/chalkboard or draw on your tablet, or step out from behind the podium and give a more personal, engaging talk. 
  • Start the slide deck with a graphic or gif slide that fills the whole screen, usually this is a funny picture or a gif of a fireplace. This allows everyone to relax, including you. It is obvious that the talk hasn't officially started yet. It provides a good color and sizing match for when you are setting up the projector. If there is trouble with the projector, the humorous image you are trying to present will take the stress off of everyone. 
  • The next slide should be the title page of your talk. This should include your name, your contact information, and your affiliation (work, school, foundation, etc.) Some percent (0.5% or so) of people who will see your talk will find something wrong with it and NEED to correct you. Providing a twitter handle or an email address gives people an outlet to contact you with questions, comments, and criticism. This is vastly preferable to someone raging on the comments section of the youtube/slideshare or whatevers.
  • You should expressly invite feedback in your dialogue, because we all need feedback as a speaker.
  •  Depending on the context and topic of your talk, you should provide the audience with your slides. For open source conference talks, this is as easy as putting your slides up on slideshare.net.  For more private talks, emailing out a pdf of your slides is appropriate.  
  • During question time, always repeat the question. This will take time and practice. In today's world, most talks are video recorded and put up online. Repeating the question is the only way for the at-home viewer to know what question you are responding to.  
  • At question time, you don't need to be an expert at everything. Often people will ask you a question you don't know the answer to or know that some friend of yours in the audience is the person to ask. You can call on this person. This has the nice effect of making the talk feel more like a dialogue. Still, you should give this person a few seconds to prepare and swallow their food and so forth. Say something like "You know that question is really best answered by my friend <friendname>. We've known each other since we worked at <company> together, where we developed <something random>. Since then they're now working at <new company name> and has become an expert in <question>. <friendname>, do you have an answer to <repeat question>."
  • Your last slide should be 'Questions?' and your name/contact information/affiliation.

Wednesday, June 11, 2014

Puppet module tool ssl error

Disclaimer: This post is trivial and is mostly just here so google can help people solve this problem a bit faster.

When trying to install a puppet module using the puppet module tool, I got the following error:

root@puppetboard:~# puppet module install nibalizer-puppetboard                                                                                                            
Notice: Preparing to install into /etc/puppet/modules ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Error: Could not connect via HTTPS to https://forgeapi.puppetlabs.com
  Unable to verify the SSL certificate
    The certificate may not be signed by a valid CA
    The CA bundle included with OpenSSL may not be valid or up to date

I ran the obvious commands to try to fix it:

root@puppetboard:~# sudo apt-get upgrade
Reading package lists... Done
Building dependency tree      
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@puppetboard:~# sudo apt-get dist-upgrade -y
Reading package lists... Done
Building dependency tree      
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

But didn't find the answer until I started using openssl to debug:


root@puppetboard:~# openssl s_client -connect foregapi.puppetlabs.com:443
-bash: openssl: command not found
root@puppetboard:~# sudo apt-get install openssl
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages were automatically installed and are no longer required:
  bind9-host libdns81 libisccc80 liblwres80 libbind9-80 geoip-database libgeoip1 libisccfg82 libisc83
Use 'apt-get autoremove' to remove them.
Suggested packages:
  ca-certificates
The following NEW packages will be installed:
  openssl
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 523 kB of archives.
After this operation, 923 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openssl amd64 1.0.1-4ubuntu5.14 [523 kB]
Fetched 523 kB in 2s (221 kB/s)  
Selecting previously unselected package openssl.
(Reading database ... 14469 files and directories currently installed.)
Unpacking openssl (from .../openssl_1.0.1-4ubuntu5.14_amd64.deb) ...
^PSetting up openssl (1.0.1-4ubuntu5.14) ...
^C^C^C^C^C

Aha! In the 'Suggested packages' section the ca-certificates package is suggested, which means that package is not installed.

I installed that package and everything went back to working. This is basically because the lxc template for precise doesn't bring the ca-certificates package along with it. Awesome. Strong job ubuntu. Way to go. Those 15k of files were really slowing people down.
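For the record, the whole fix is one command:

root@puppetboard:~# apt-get install ca-certificates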

Wednesday, April 23, 2014

Guest post on the Puppet Labs blog

Today I was honored by being asked to write a post on the official Puppet Labs blog. That post is here. I wrote on my continuing efforts to use Puppet in a rootless environment.

Anyone looking to do more work in the rootless environment should check out Cory Osman's nonroot puppet repository here. You can see the rootless stdlib UTi is developing here.

And Daenny is putting together a group blog called puppet-a-day. I recommend it for staying current on Puppet best practices, and if you have cool things I recommend you submit them there!

Monday, April 21, 2014

Ganeti cloud init

Nightfly19 created a ganeti os definition based on our work with images last week. It is available now from https://github.com/pdxcat/ganeti_cloud-init.

This is a simple wrapper that lets you use images with ganeti virtual machines. This is huge because you can use prebuilt images from operating system vendors. Most OS distributions are crawling over each other trying to provide openstack-capable images. Now these images and convenience are available to the ganeti world.

From the readme:


The advantages to installing virtual machines from cloud images are:
  • Speed: This is much faster than using PXE boot or cdrom boot to load virtual machines.
  • Consistency: These images will be identical to amazon AMIs or machines running on an openstack cluster.
  • Lower effort: The OSUOSL Ganeti-instance-image project can produce a similar effect, but requires the administrator to provide their own dump/image files. These images can be pulled from the internet.

We hope to improve the functionality and installation instructions in the next couple of weeks. But for now, we'd love for some other ganeti users to try it out and let us know if it works in your environment, and what we can do to make it more generally usable.

Thursday, April 17, 2014

Using cloud images with Ganeti


Ganeti is an open source private cloud platform. It is primarily developed by Google. It is written in python, with bits of haskell thrown in for flair.

Virtual machine creation in ganeti is done through debootstrap or traditional methods such as PXE loading or cd-rom/iso interactive installations.

This approach is slower and more error prone than the image-based VM creation favored by other private cloud software such as Openstack.

Most major distributions that identify as 'cloud ready' produce regular 'cloud images' to be deployed to public clouds like Amazon or private clouds such as Openstack.

Out of the box, ganeti does not support these cloud images. This post will describe our super janky way of getting them to work. We present a method for launching ganeti vms quickly from stock cloud images. Future work may come of this where we develop what ganeti calls an 'os install template.' If we do that, expect a follow up post on this blog.

This work has been done in collaboration with nightfly19.


First, we collect the Ubuntu cloud image from the web:

# wget 'http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img'

This file is a disk image containing a disk label, a bootloader, a partition table, and a partition with the root filesystem of the ubuntu server os. From the README:


- <prefix>-disk1.img
  This is a qcow2 compressed disk image.  It can be uploaded to OpenStack
  or booted directly via kvm.  You should probably uncompress the image
  (qemu-img convert) before use in non-testing environment.



To prove this, let's inspect it with qemu-img:

# qemu-img info precise-server-cloudimg-amd64-disk1.img
image: precise-server-cloudimg-amd64-disk1.img
file format: qcow2
virtual size: 2.2G (2361393152 bytes)
disk size: 248M
cluster_size: 65536

This file is in the qcow2 format, which stands for qemu copy on write version 2. This is the file format used by kvm with qemu. Since we're using ganeti instead of raw kvm, we need to apply this image to the disk of a ganeti virtual machine. Furthermore, this image is compressed, so we will first uncompress it:


# qemu-img convert -O qcow2 precise-server-cloudimg-amd64-disk1.img precise64.img

This command will take several seconds to run.

# ls
precise64.img  precise-server-cloudimg-amd64-disk1.img

Now we must convert the uncompressed qcow image into a raw disk, the exact byte-for-byte representation of a block device, suitable for dd'ing directly onto the ganeti instance block device.

# qemu-img convert precise64.img -O raw precise64raw.img

We can then verify that the precise64raw.img file contains valid partition data:


# fdisk -l precise64raw.img

Disk precise64raw.img: 2361 MB, 2361393152 bytes
4 heads, 32 sectors/track, 36032 cylinders, total 4612096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000a23e

           Device Boot      Start         End      Blocks   Id  System
precise64raw.img1   *        2048     4194303     2096128   83  Linux


This precise64raw.img can be reused every time we want to create a new virtual machine.


The file we'll use to customize each instance of the ubuntu cloud vm is created by a script called cloud-localds. On newer ubuntu releases, this script is available from packages. On older releases, download it from here: https://gist.github.com/cpswan/6221258

This configuration is part of an ubuntu specification called NoCloud. You can research it for more control flags.


# cat > my-user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF


# /usr/local/bin/cloud-localds my-seed.img my-user-data
wrote my-seed.img with filesystem=iso9660 and diskformat=raw

With these two images created (my-seed.img is very small), we can create the ganeti instance. Assuming you have set up networking in your environment, you can use this:

# gnt-instance add --disk-template plain -o image+default --disk 0:size=10g --net 0:mac="cc:aa:77:bb:1c:2d" -H serial_console=true --no-install --no-start solarium.cat.pdx.edu 
Thu Apr 17 01:16:04 2014  - INFO: Selected nodes for instance solarium.cat.pdx.edu via iallocator hail: kvm-milkyway.cat.pdx.edu
Thu Apr 17 01:16:05 2014 * disk 0, size 10.0G
Thu Apr 17 01:16:05 2014 * creating instance disks...
Thu Apr 17 01:16:05 2014 adding instance solarium.cat.pdx.edu to cluster config
Thu Apr 17 01:16:07 2014  - INFO: Waiting for instance solarium.cat.pdx.edu to sync disks
Thu Apr 17 01:16:07 2014  - INFO: Instance solarium.cat.pdx.edu's disks are in sync

# gnt-instance modify --disk add:size=1g solarium.cat.pdx.edu
Thu Apr 17 01:20:07 2014 * disk 1, size 1.0G
Thu Apr 17 01:20:08 2014  - INFO: Waiting for instance solarium.cat.pdx.edu to sync disks
Thu Apr 17 01:20:08 2014  - INFO: Instance solarium.cat.pdx.edu's disks are in sync
Modified instance solarium.cat.pdx.edu
 - disk/1 -> add:size=1024,mode=rw
Please don't forget that most parameters take effect only at the next (re)start of the instance initiated by ganeti; restarting from within the instance will not be enough.

Make sure the instance is down:

# gnt-instance list

solarium.cat.pdx.edu        kvm        image+default     kvm-milkyway.cat.pdx.edu   ADMIN_down      -

Activate its disks:

# gnt-instance activate-disks solarium
kvm-milkyway.cat.pdx.edu:disk/0:/dev/ganeti/78f5c111-f6a0-4446-a590-66ed1d6c2b04.disk0
kvm-milkyway.cat.pdx.edu:disk/1:/dev/ganeti/4a9a53e0-e2b7-49c7-a09f-d0dafda9fb28.disk1

DD the images to the disks:


# dd if=precise64raw.img of=/dev/ganeti/78f5c111-f6a0-4446-a590-66ed1d6c2b04.disk0
4612096+0 records in
4612096+0 records out
2361393152 bytes (2.4 GB) copied, 68.8486 s, 34.3 MB/s

# dd if=my-seed.img of=/dev/ganeti/4a9a53e0-e2b7-49c7-a09f-d0dafda9fb28.disk1
732+0 records in
732+0 records out
374784 bytes (375 kB) copied, 0.0289169 s, 13.0 MB/s

Now deactivate the disks and start the instance:
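The deactivate step isn't captured in my scrollback; it's just this, with the same instance name as above:

# gnt-instance deactivate-disks solarium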

# gnt-instance start solarium
Waiting for job 4670 for solarium.cat.pdx.edu ...

This machine by default will write all its output to the serial console. Watch this with:

# gnt-instance console solarium

<snip>

This will show a combination of standard dmesg from a system booting and the output of the cloud-init/NoCloud scripts. Part of what will be echoed are the ip address and routing table of the instance:

mountall: Event failed
ci-info: lo    : 1 127.0.0.1       255.0.0.0       .
ci-info: eth0  : 1 131.252.208.5   255.255.255.0   cc:aa:77:bb:1c:2d
ci-info: route-0: 0.0.0.0         131.252.208.1   0.0.0.0         eth0   UG
ci-info: route-1: 131.252.208.0   0.0.0.0         255.255.255.0   eth0   U
cloud-init start running: Thu, 17 Apr 2014 08:38:07 +0000. up 4.66 seconds

Then you will see the host key randomart pictures for ssh as well as the fingerprints:


ec2: #############################################################
ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
ec2: 1024 9e:55:82:94:82:db:59:c1:60:1f:94:38:3b:36:af:83  root@ubuntu (DSA)
ec2: 256 22:b5:0c:45:83:5e:ee:a4:fb:4f:96:c7:01:88:26:7a  root@ubuntu (ECDSA)
ec2: 2048 cf:0f:ea:cf:ff:89:ed:7d:f5:cb:3e:48:e1:9e:2d:72  root@ubuntu (RSA)
ec2: -----END SSH HOST KEY FINGERPRINTS-----
ec2: #############################################################
-----BEGIN SSH HOST KEY KEYS-----
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAQAEcZ08DnGNr6HF8UcRsOge98sYVOm37sSZyj+NplKtBl+oL6lqCmwqgNwrz1F497ym2XVT7IEpXJ5NyuMqsk= root@ubuntu
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCvIu5vC8FaGe4Hk+A280VGi07CsXFiDXpuA5q2ajTnMKa0Wy8PSWArZthSZrYpeoHYFL0ac+8mxfQMv8wafnxSiNkaYLOQ1GZr0lZr08oGry9FzD4w+kzqRudmE29aivffmNj4Qe
J6P86nQ8j/0M0b7nQhn7SLKJENQzuucsygVaPptRIu+Ntuap+F2sALPVN+oDpVt/IOLWnGg7ht5dG669OadLFbVebVhITNVah4CKSHmZJ37LqA64ArTpCw8bT60074yjlhDxLttUurVOFitnPaJfNzH8wBQzdN5GeU1Yy7iTNs6wht
vLn8pW+xt4rCHhlbGoEsx5sUzbscrx4p root@ubuntu
-----END SSH HOST KEY KEYS-----


The machine should DHCP itself and be ready to accept password authentication momentarily.

# ssh ubuntu@solarium.cat.pdx.edu
The authenticity of host 'solarium.cat.pdx.edu (131.252.208.5)' can't be established.
ECDSA key fingerprint is 22:b5:0c:45:83:5e:ee:a4:fb:4f:96:c7:01:88:26:7a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'solarium.cat.pdx.edu,131.252.208.5' (ECDSA) to the list of known hosts.
ubuntu@solarium.cat.pdx.edu's password:
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.2.0-60-virtual x86_64)
...

This was a pretty rough cut of the process. We hope to streamline it and build software that smooths this process out. One obvious place to look for making things faster is placing the precise64raw.img file in /dev/shm so that we read from memory and write to disk, instead of reading and writing from disk.
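That would look something like this, reusing the device path from earlier:

# cp precise64raw.img /dev/shm/
# dd if=/dev/shm/precise64raw.img of=/dev/ganeti/78f5c111-f6a0-4446-a590-66ed1d6c2b04.disk0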


References:

http://cloud-images.ubuntu.com/daily/server/precise/current/
https://help.ubuntu.com/community/UEC/Images
http://docs.ganeti.org/ganeti/master/html/design-ovf-support.html
http://docs.ganeti.org/ganeti/master/html/ovfconverter.html
http://serverfault.com/questions/466269/ganeti-installing-centos-rhel-instance
https://groups.google.com/forum/?fromgroups=#!topic/ganeti/YH_5pM8rAvc
http://openstack.redhat.com/Image_resources
https://nsrc.org/workshops/2014/sanog23-virtualization/raw-attachment/wiki/Agenda/ex-ganeti-create-vm.htm
http://www.slideshare.net/ramereth/ganeti-handson-walkthru-part-2-linuxcon-2012
https://gist.github.com/cpswan/6221258

Thursday, April 3, 2014

RHEL 7 and Installing EPEL

RedHat 7 is nearly here. It is out in beta right now. This means that I've installed it for reasons that are clear to no one. It also means we'll get a number of posts related to making it do the most basic things.

Notably, installing EPEL 7 right now is not obvious. Typically there is a .rpm file that will install the repository for you. That is true in this case but the link to it is hard to find.

Additionally, not all packages from EPEL 6 are in EPEL 7. If you find a package you need, reach out to the maintainer to get it migrated. The process is relatively painless.

To install EPEL on Red Hat Enterprise Linux 7 Beta:


wget 'http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.1.noarch.rpm'


nibz@computer$ wget 'http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.1.noarch.rpm'
--2014-04-03 14:30:34-- http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.1.noarch.rpm
Resolving dl.fedoraproject.org (dl.fedoraproject.org)... 209.132.181.27, 209.132.181.23, 209.132.181.24, ...
Connecting to dl.fedoraproject.org (dl.fedoraproject.org)|209.132.181.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13200 (13K) [application/x-rpm]
Saving to: ‘epel-release-7-0.1.noarch.rpm’

100%[======================================>] 13,200 84.0KB/s in 0.2s 

2014-04-03 14:30:35 (84.0 KB/s) - ‘epel-release-7-0.1.noarch.rpm’ saved [13200/13200]

Then install the rpm file.
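That command didn't make it into my scrollback; something along these lines does it, assuming the rpm is in the current directory:

$: sudo yum localinstall epel-release-7-0.1.noarch.rpm

Afterwards, a yum update shows the new epel repo being used: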

[skrum@pdxudev98 Downloads]$ yum update
Loaded plugins: langpacks, product-id, subscription-manager
You need to be root to perform this command.
[skrum@pdxudev98 Downloads]$ sudo !!
sudo yum update
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
epel/x86_64/metalink                                     |  14 kB     00:00    
epel                                                     | 3.7 kB     00:00    
(1/2): epel/x86_64/group_gz                                | 243 kB   00:00    
(2/2): epel/x86_64/primary_db                              | 1.7 MB   00:04    
No packages marked for update

Wednesday, April 2, 2014

Pro Puppet 2nd Edition

I wrote a book! Well, me and some people wrote a book. The original is by James Turnbull and Jeff McCune. The update is by myself, William Van Hevelingen, and Ben Kero.

Pro Puppet 2nd Edition is an update to the original Pro Puppet, newly updated for Puppet 3.

We cover a number of topics new since the original came out:


  • Hiera (entire chapter devoted to this topic)
  • Puppet DB
  • Puppet Enterprise
  • PuppetBoard
  • r10k
  • rspec-puppet
  • rspec-system (now called Beaker)
And we have updated material on other topics such as:

  • Puppet language
  • Puppet syntax
  • Puppet modules
  • Best practices
  • Scaling Puppet
  • Exported resources
  • Marionette Collective (MCollective)
If you have any feedback on this title please drop me a line.

The source code for the examples can be found here:

https://github.com/pro-puppet/source-code

And the errata are being collected at:


https://github.com/pro-puppet/pro-puppet-errata