short thoughts


Manage Windows 2003 and 2008 Firewall rules with Chef

posted Jun 6, 2012, 8:01 PM by Steve Craig

Introducing Cerberus - the Microsoft Windows 2003 / 2008 firewall manager for Chef

Smashrun runs primarily on a combination of Windows 2003 and Windows 2008 servers, with VMware as the primary virtualization platform.  The infrastructure supporting Smashrun is thin, as the company operates on a lean, shoestring budget.  As such, primary access control is done on the nodes themselves, rather than centrally on dedicated hardware firewall devices.  Additionally, there is a certain amount of dynamism to the Smashrun environment (servers, workstations, and laptops change IP addresses fairly often) that needed to be taken into account when thinking about providing network access.

This all meant that a solution for easily managing Windows firewall rules needed to be created.  Since Chef manages other aspects of Smashrun's servers, why not the Windows firewall rules?  Thus Cerberus was written, to simplify the process of managing Windows firewall rules.

The key thought behind Cerberus' modus operandi was simple: define the permitted ports and protocols in one databag, and then the permitted IP addresses and ranges in another.  Any IP inside the ip_permit databag would have access to any of the declared ports.
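The whole model fits in a few lines of plain Ruby.  This is an illustrative sketch, not the cookbook's actual code; only the ip_permit bag name comes from the post, and the item layouts shown are assumptions:

```ruby
# Illustrative item shapes for the two bags; only "ip_permit" is the
# cookbook's real bag name - the rest is assumed for this sketch.
ip_permit = ["10.0.0.5", "192.168.1.0/24"]
port_permit = [
  { "port" => 3389, "protocol" => "TCP", "name" => "rdp" },
  { "port" => 1433, "protocol" => "TCP", "name" => "mssql" }
]

# Every permitted IP gets access to every declared port/protocol pair.
rules = ip_permit.product(port_permit).map do |ip, svc|
  { name: svc["name"], protocol: svc["protocol"], port: svc["port"], ip: ip }
end

rules.each do |r|
  puts "#{r[:name]}: permit #{r[:protocol]}/#{r[:port]} from #{r[:ip]}"
end
```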

Windows Firewall now comes in two flavors: "netsh firewall", which is version 1 for Windows XP and 2003; and "netsh advfirewall", which is version 2 for Vista, Windows 7, 2008 and beyond.  The Cerberus cookbook takes two completely different approaches to implementing firewall rules - playing to each version's particular strengths, in my humble opinion - for Windows 2003 and 2008 ... while maintaining a single unified location and format for declaring firewall rules (a la Chef data bags).

So, for Windows 2003, there are a number of ways to manage firewall rules:
- via Group Policy (the preferred method for AD sites)
- during server build (unattend.ini file)
- netsh via settings from the Netfw.inf file (found in the following location: %windir%\Inf\Netfw.inf)
- netsh via the commandline for each individual rule

Each of these has merit; however, the clear winner for Windows 2003, fitting quite nicely with my general "hybrid approach" to utilizing Chef, is "netsh with settings provided by the netfw.inf file".  My "hybrid approach" to utilizing Chef for configuration management is simply to have Chef utilize tools that a human operator could use themselves.  I like the idea of generating batch files, VB scripts, PowerShell, or SQL (via ERB templates) that Chef can maintain and that human beings can use as well.

The other two nice things about the netfw.inf method are:
1. it lends itself nicely to version control
2. Chef's excellent built-in template handling ensures that the netsh command is only run when the template (the firewall rules) actually changes

So that is how it goes with Windows 2003.  A basic template contains the general rule framework, and is fleshed out with the contents of two data bags: one bag that contains the permitted IPs (either hosts or network ranges), and the second bag that contains the permitted ports and protocols.
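A toy version of that template flow, using Ruby's stock ERB (the INF payload here is deliberately simplified and illustrative; the cookbook's real netfw.inf template is more involved):

```ruby
require "erb"

# A much-simplified stand-in for the cookbook's netfw.inf template: one
# template, fleshed out from the two data bags, regenerated only when the
# underlying data actually changes. Section name and line format are
# illustrative, not real INF syntax.
template = ERB.new(<<~TPL, trim_mode: "-")
  [netfw_OpenPorts]
  <%- ports.each do |p| -%>
  <%- ips.each do |ip| -%>
  "<%= p[:port] %>:<%= p[:protocol] %>:<%= ip %>:enabled:<%= p[:name] %>"
  <%- end -%>
  <%- end -%>
TPL

ips   = ["10.0.0.5", "192.168.1.0/24"]
ports = [{ port: 3389, protocol: "TCP", name: "rdp" }]

rendered = template.result(binding)
puts rendered
```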

For Windows 2008, Microsoft removed the netfw.inf functionality and began storing firewall rule references inside the registry.  These two things complicated matters a bit; however, my cookbook "kronos", which manages Windows 2003 and 2008 Scheduled Tasks, needs to support similar requirements for Windows 2008, so I borrowed a bit from that.

The Windows 2008 version of Cerberus uses the exact same data bags as 2003; however, all rule names are prefixed with "cerberus_", and those managed rules are destroyed and re-created on each run.  In some ways this is not quite as elegant as the templated method used for Windows 2003 (y u do extra work?), but it does have the advantage of supporting the coexistence of managed and non-managed firewall rules.
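A rough sketch of what the 2008 run boils down to, in plain Ruby.  The netsh advfirewall grammar is real; the rule data and the exact command layout are illustrative assumptions:

```ruby
# Managed rules carry the "cerberus_" prefix, get deleted wholesale, and
# are then re-created from the data bags. Rule data is illustrative.
PREFIX = "cerberus_"

rules = [
  { name: "rdp",   protocol: "TCP", port: 3389, ip: "10.0.0.5" },
  { name: "mssql", protocol: "TCP", port: 1433, ip: "10.0.0.5" }
]

commands = rules.flat_map do |r|
  full_name = "#{PREFIX}#{r[:name]}"
  [
    %(netsh advfirewall firewall delete rule name="#{full_name}"),
    %(netsh advfirewall firewall add rule name="#{full_name}" dir=in action=allow protocol=#{r[:protocol]} localport=#{r[:port]} remoteip=#{r[:ip]})
  ]
end

puts commands
```

Because unmanaged rules never carry the prefix, they survive every run untouched - which is the advantage mentioned above.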



Thanks for taking the time to stop by!

Manage Windows 2003 and 2008 Scheduled Tasks with Chef

posted May 21, 2012, 7:51 PM by Steve Craig

So, I'm a fan of Opscode and Chef, as you may have gathered from my previous posts: I wrote a cookbook to install magiciso, I wrote up how I manage Microsoft SQL Server backups and why I went with Chef instead of Puppet, and I did a quick summary on how to set up your Mac OS X workstation to manage Windows Chef nodes.

My third release is Kronos, a cookbook to manage Microsoft Windows 2003 and 2008 Server Scheduled Tasks with Chef.  The overall design is simple, and locates most settings inside databags.  Kronos supports both Task Scheduler version 1 and 2.

Windows 2003 / Task Scheduler v1 integration is simple and straightforward: Chef deletes all scheduled tasks each run, and then recreates them all from data bag settings.  This has two main limitations: first, task history is deleted along with the settings; and second, no "unmanaged" tasks are permitted, as they will be purged at the beginning of each Chef run.  Clearly this is suboptimal.  Kronos for Windows 2003 does, however, support nearly every Task Scheduler v1 setting option.
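A sketch of the purge-and-recreate idea using the real schtasks verbs.  The task data is illustrative, and for brevity this only deletes the tasks it knows about, whereas the actual recipe purges everything:

```ruby
# Task data is illustrative; /Delete and /Create are real schtasks verbs.
tasks = [
  { name: "db-full-backup", run: "c:\\scripts\\fullbackup.cmd", schedule: "DAILY",  start: "23:00" },
  { name: "db-translog",    run: "c:\\scripts\\translog.cmd",   schedule: "HOURLY", start: "00:30" }
]

# Delete then re-create each task from the data bag settings.
commands = tasks.flat_map do |t|
  [
    %(schtasks /Delete /TN "#{t[:name]}" /F),
    %(schtasks /Create /TN "#{t[:name]}" /TR "#{t[:run]}" /SC #{t[:schedule]} /ST #{t[:start]})
  ]
end

puts commands
```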

Windows 2008 / Task Scheduler v2 integration is a little more advanced, and addresses both of these limitations.  Kronos for Windows 2008 permits the existence of "unmanaged" tasks, and deletes and recreates only the managed tasks on each run.  Task settings are stored in the registry, so Kronos for Windows 2008 is a bit more complicated in order to support that aspect.  Kronos for Windows 2008 does not yet support all the available Task Scheduler v2 settings.

smashrun.com has been running Kronos in production for about a year now and has run into no major issues.  It has proven to be a reliable and straightforward method of ensuring tasks are appropriately scheduled.

Chef install Magic Disc to mount ISO files on windows

posted Mar 31, 2012, 11:20 AM by Steve Craig

This one was easy... but it'll sure be nice to know that I won't have to manually install some application just to mount ISOs on my Windows machines.  Crazy to me that Microsoft hasn't taken care of this.  Oh well.

Enjoy my second public chef cookbook, available HERE on the opscode.com cookbook site.

chef microsoft sql server backup

posted Dec 17, 2011, 1:56 PM by Steve Craig   [ updated Dec 18, 2011, 5:37 PM ]

Recipes for Microsoft Windows Server 2003 with MSSQL Database Server 2005 backup and restore with chef data bags

Summary:
Backups in general are a critical operations task.  With databases in particular, both the data contained inside the database and the transaction logs must be backed up, or else point-in-time recovery is not possible.  With a full backup and an unbroken transaction log chain, however, recovering a database to a recent point-in-time after a disaster is near-trivial.

The technologies used here are Windows Server 2003 running Microsoft SQL Server 2005 with Ruby 1.8.7 and Chef 0.10, yet most of the concepts will apply to newer Microsoft technologies ... with the notable exception of scheduled tasks.

Basic MSSQL database backup and restore/refresh activities should be fully automatic, so that human intervention is unnecessary on a day-to-day basis.  Furthermore, "refresh" (importing databases from one environment {production} to another {qa}) should utilize essentially the same functionality as the basic "restore" (recovering a database from a backup copy).  Again, minimal-to-no human interaction should be required for specific restore scenarios.  Leverage simple batch files, text lock files, and Windows scheduling to enable Chef to perform the grunt work.

This post focuses on two things: how to perform daily full database and hourly transaction log backups under Microsoft's "full recovery model" as well as how to recover the database from those backups.  There are two main paradigms for backup and restore of Microsoft SQL server: full and simple.  If you are unaware or unsure of the purpose of transaction logs, the full recovery model is not for you.  Use the simple recovery model, take a full backup of your database nightly and call it a day.  However: if you will ever need the ability to recover your database to a particular point-in-time, or need to close the potential data-loss window after a disaster to a minimal amount of time (minutes, not hours) then you will need the full recovery model.  Read on.

The goal:
Enable Chef to perform full MSSQL backups under the "full recovery model" on a daily basis with transaction log backups every hour.  Compress the backups and move them offsite.  Enable Chef to create empty databases and then perform MSSQL refresh and restore operations in an automated, scheduled fashion.

Left unsaid:
This post will gloss over certain bedrock Chef-related items, such as installing and configuring chef agents on your servers and writing basic chef recipes, along with the creation of templates and data bags.  The opscode.com wiki is instructive in these matters, and I have also written blog posts on some of the basics of setting up chef HERE.  It also skips over the details of many Microsoft-related items, such as command line utilities for executing transact-sql and basic philosophies for MSSQL database configuration.

Key information:
Before one begins automating MSSQL backup and restore functionality under the "full recovery model", note the single most important concept and the two primary "gotchas" that go along with it:

- The single most important concept to understand with regard to "full recovery model" MSSQL backups is that the LSN (Log Sequence Number) is EVERYTHING.  Microsoft documentation is wordy and obtuse on this point.  Do not get caught up in the particulars of Microsoft's musty documentation.  Understand only that you MUST be able to query the LSN of the backup files that you are generating, and know how to sequence them such that there is an unbroken transaction log chain.  If the translogs are out of order or missing, you will only be able to restore up to the break in the LSN chain.

THIS post has a summary of LSN, as well as the technical specifics of the transact-sql necessary to query LSN information from backup files.  These transact-sql commands are basic and important.  Do not fail to read the article HERE.  There are two methods of querying for LSN, depending on whether you have access to the original database that spawned the backup files or only have access to the files themselves.  Understand how to use both methods.  Literally, if you are attempting to code your own solution while referring to my post, take a break and get cozy with LSN now.  It WILL save you stress later.

1. if you have access to the MSSQL server that spawned the backups, use this transact-sql:

select database_name, type, first_lsn, last_lsn, checkpoint_lsn, database_backup_lsn
from msdb..backupset
where database_name = 'YOURDBNAME'

2. if you have access to only the backup files themselves, use these transact-sql commands to grab the information from the most recent full database backup, and then the translog backups afterwards:

-- investigate the headers of your fullbackup file ".bak"
RESTORE HEADERONLY from disk='c:\YOURDBNAME.bak'
-- investigate the headers of your translog backup files ".log"
RESTORE HEADERONLY from disk='c:\YOURDBNAME-1.log'
RESTORE HEADERONLY from disk='c:\YOURDBNAME-2.log'
--
RESTORE HEADERONLY from disk='c:\YOURDBNAME-n.log'
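Once you can read the FirstLSN/LastLSN columns out of those headers, checking the chain is mechanical.  A sketch with made-up LSN values, assuming each translog backup must pick up exactly where the previous one left off:

```ruby
# LSN values are illustrative, shaped like what RESTORE HEADERONLY reports.
backups = [
  { file: "YOURDBNAME-1.log", first_lsn: 5000, last_lsn: 5200 },
  { file: "YOURDBNAME-2.log", first_lsn: 5200, last_lsn: 5450 },
  { file: "YOURDBNAME-3.log", first_lsn: 5450, last_lsn: 5600 }
]

# The chain is unbroken when each translog starts exactly where the
# previous one ended.
def chain_unbroken?(backups)
  backups.each_cons(2).all? { |a, b| b[:first_lsn] == a[:last_lsn] }
end

puts chain_unbroken?(backups)                  # intact chain
puts chain_unbroken?(backups.values_at(0, 2))  # skip one: chain is broken
```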

Now that you know how important the LSN is to the full recovery model, and how to query LSN information from either the source database or the backup files, here are the two primary GOTCHAS when working with LSN inside the full recovery model.

1. The most recent, uncompressed full backup file must be available when translog backups run.  So, if you want your backup routine to "cleanup" after itself by compressing the backups and moving them offsite, you will need to ensure that you leave the most recent uncompressed full backup behind.

2. By default, Microsoft MSSQL backup functionality concatenates backups into an existing backup file.  While working inside the "full recovery model" for the first time, the default concatenate behaviour will normally surface as strange translog backup file sizes: i.e., translog files that are uniformly large and seem to contain all the changes since the last full backup, rather than simply the changes since the previous translog backup.  For those of us with a rigorous compress-and-move-offsite backup strategy, this default concatenate behaviour is unwanted and unnecessary {IT'S CRAP, MICROSOFT!}.  Ensure you utilize the "WITH FORMAT" and/or "WITH INIT" transact-sql options when creating your backups to prevent this crazy concatenation.
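For example, here is a sketch of building those backup statements with WITH INIT baked in (the database name and paths are illustrative):

```ruby
# WITH INIT is the real transact-sql option that overwrites the backup
# set instead of appending to it; names and paths are illustrative.
db = "YOURDBNAME"

full_sql = "BACKUP DATABASE [#{db}] TO DISK = 'c:\\#{db}.bak' WITH INIT"
log_sql  = "BACKUP LOG [#{db}] TO DISK = 'c:\\#{db}-1.log' WITH INIT"

puts full_sql
puts log_sql
```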


So, what does all that look like on my windows servers? Onwards, opschef recipes! #devops, ho!

CREATEDB.RB
createdb.rb recipe: https://gist.github.com/1491334
createdb.rb attribute: https://gist.github.com/1491338
createdb.rb template1: https://gist.github.com/1491439
createdb.rb template2: https://gist.github.com/1491443

First, if we are going to take the time to backup and restore our databases in an automated fashion, it makes sense to first enable creation of databases from scratch as well.  If databases are going to be created by Chef, one natural location (although there are many) for their configuration information is a data bag.

The "createdb.rb" recipe utilizes information from two JSON data bags to create MSSQL databases on a target host.  It runs unless a lockfile exists.  This is to prevent round-trip data bag searches from occurring on each Chef run, not to prevent MSSQL funkiness - MSSQL will fail to create a new database if one already exists with the requested name.  The first data bag, "running_database", contains a simple list of the names of the databases that the host should run, and the second data bag, "database_information", contains the settings for each database.  "createdb.rb" is not currently fully-baked in some straightforward ways.  An in-depth discussion of createdb.rb's current shortcomings is beyond the scope of this blog post.  Any suggestions to clean/improve it are welcome.
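The two-bag join is simple enough to sketch in a few lines of plain Ruby; the item shapes below are assumptions, since the real items live in data bag searches:

```ruby
# "running_database" lists what this node should run; "database_information"
# holds per-database settings. Item shapes are assumed for illustration.
running_database = ["smashrun", "reporting"]
database_information = {
  "smashrun"  => { "size_mb" => 500, "recovery_model" => "FULL" },
  "reporting" => { "size_mb" => 100, "recovery_model" => "SIMPLE" }
}

# Join the "what should run here" list against the per-database settings.
to_create = running_database.map { |name| [name, database_information.fetch(name)] }.to_h

to_create.each { |name, cfg| puts "create #{name} (#{cfg['recovery_model']})" }
```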

After grabbing the necessary configuration information from the data bags, the "createdb.rb" recipe utilizes a dual-template method that I come back to in many of my windows recipes: one template contains the batch file to execute the necessary commands, and another template contains the payload information.

BACKUPDB.RB
backupdb.rb recipe: https://gist.github.com/1491332
backupdb.rb attribute: https://gist.github.com/1491326
backupdb.rb template1: https://gist.github.com/1491439
backupdb.rb template2: https://gist.github.com/1491453
backupdb.rb template3: https://gist.github.com/1491458

"backupdb.rb" is the main recipe inside of a larger "backupdb" role.  This role also contains a "store-registry.rb" recipe (not included here), which inserts ssh keys into the node's registry and enables putty to transfer files across the innertubes via pscp, as well as an "rsyncdb.rb" recipe that compresses the resulting backup files and rsyncs them to other nodes inside the same datacenter (referenced below).  The "rsyncdb.rb" recipe in turn depends on the "deltacopy.rb" recipe for the installation and configuration of a Windows rsync daemon.  The "backupdb" role currently runs on the production database server.

Naturally, the "backupdb.rb" recipe utilizes the database configuration information located inside the previously-mentioned data bags, a few other settings inside application-specific attribute files, and some recipes that install supporting software that no Windows server should be without (sevenzip, putty, deltacopy).  "backupdb.rb" will back up all the databases listed inside the running_database data bag for the node it runs on.

The paradigm is simple and similar to the "createdb.rb" recipe: only run if no lockfile exists.  Each of the database backup types (full backup and translog backup) has its own lockfile (the log of the database backup).  These lockfiles are deleted on an appropriately scheduled basis (once a day for the full backup lockfile, and hourly for the translog backup lockfile), which permits chef to cook the recipe when the lockfile is missing.  The flow for each database backup type: first, grab the necessary database settings from the data bags if there is no lockfile; second, ensure the two templated files (the batch file to execute the transact-sql statements, as well as the specific backup sql to be run) exist and are up to date, and execute them.
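The lockfile gate itself reduces to a couple of lines.  A sketch (the path is illustrative; in the recipe, the backup's own log file is the lockfile, and a scheduled task deletes it on the desired cadence):

```ruby
require "tmpdir"

# If the lockfile exists, the work was already done this cycle; otherwise
# do the work and leave the lockfile behind as proof.
def run_backup_once(lockfile)
  return :skipped if File.exist?(lockfile)
  # ... fetch data bag settings, render the two templates, run the batch file ...
  File.write(lockfile, "backup ran at #{Time.now}")
  :ran
end

lock = File.join(Dir.mktmpdir, "fullbackup.log")
puts run_backup_once(lock) # does the work
puts run_backup_once(lock) # gated off until the lockfile is deleted
```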

The output of this is the database backup file and logging information (which is used as the lockfile).  In the interest of simplicity, "backupdb.rb" is concerned only with getting a valid db dump to disk: no post-processing or other after-the-fact work is done here.  Tasks such as compressing the database dump and moving it offsite are handled by other recipes.  It is of course possible for the database backup to fail for any number of reasons and still create a log/lockfile with this method, which would prevent the backup routine from running until the next scheduled run.  This is sub-optimal in some ways; however, if the backup report is logged to a database/emailed/tweeted/posted to a ticket and reviewed, operators can control for the risk.

RSYNCDB.RB
rsyncdb.rb recipe: https://gist.github.com/1491351
"rsyncdb.rb" again utilizes the same data bags for necessary settings and only runs if its lockfile does not exist.  This lockfile is deleted via a scheduled task, as above.  Absent this lockfile, the node will gather the necessary settings and stamp the backup files such that they can be restored later.  The full database backup is always labeled "zero" and each translog backup is stamped with the hour that it was created.  This makes restoring the database a snap.  As previously mentioned, database backups are rsync'd from the primary dbserver to any number of other locations within the same facility.  In my case this is primarily to enable nightly refreshes from prod to qa.
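A sketch of the stamping scheme, with the filename layout assumed for illustration (the post only specifies "zero" for the full backup and an hour stamp for each translog):

```ruby
require "time"

# Full backup is always "zero"; each translog carries the hour it was
# created, so restore order is obvious. Filename layout is assumed.
def stamp(db, type, at)
  if type == :full
    "#{db}-zero.bak"
  else
    format("%s-%02d.log", db, at.hour)
  end
end

puts stamp("smashrun", :full, Time.now)
puts stamp("smashrun", :log,  Time.parse("2011-12-17 14:05"))
```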

RESTOREDB.RB
restoredb.rb recipe: https://gist.github.com/1491376
restoredb.rb attribute: https://gist.github.com/1491382
restoredb.rb template1: https://gist.github.com/1491439
restoredb.rb template2: https://gist.github.com/1491483

The "restoredb.rb" recipe can run on any database server.  Its current primary function (in addition to enabling database restores if necessary on production) is to enable nightly database refreshes on qa.  The "restoredb.rb" recipe is the first recipe inside a "restoredb" role that also includes a "postrestoredb.rb" recipe.

The "restoredb.rb" recipe contains a list of databases that should be restored/refreshed in-line.  This is sub-optimal.  It should utilize the data bags that the other recipes use.  Like everything else, it runs unless a lockfile exists.  This lockfile is deleted every 24 hours on QA, and also contains a reference to an internal application version number.  This enables a database refresh to occur after a different version of code is pushed to the app servers.

POSTRESTORE.RB
postrestoredb.rb recipe: https://gist.github.com/1491403
postrestoredb.rb template1: https://gist.github.com/1491439
This recipe is contained inside the "restoredb" role, after "restoredb.rb", and is mainly utilized to bring the QA database that was just refreshed with production data "up-to-speed" with schema changes that support the newer code running on QA. The meat of this recipe is similar to the others: "postrestoredb.rb" runs unless a lockfile exists.  Right now, it contains specific references to a database name mainly because only one particular database requires post-restore scripts to roll it forwards to the current qa schema, which is slightly ahead of production.


ASSOCIATED FILES (templates):

createdb.rb
- templated batch file to wrap around the arbitrary T-SQL execution: https://gist.github.com/1491439
- templated T-SQL script to create databases:  https://gist.github.com/1491443

backupdb.rb
- templated batch file to wrap around arbitrary T-SQL execution: https://gist.github.com/1491439
- templated T-SQL script to do full database backups: https://gist.github.com/1491453
- templated T-SQL script to do translog database backups: https://gist.github.com/1491458

rsyncdb.rb
- utilizes much functionality not covered here: sevenzip, deltacopy, and a twitter library.  Chef takes care of those for me as separate matters!

restoredb.rb
- templated batch file to wrap around arbitrary T-SQL execution: https://gist.github.com/1491439
- templated T-SQL script to do database restores: https://gist.github.com/1491483

postrestoredb.rb
- utilizes functionality not covered here: svn, emailing reports, and updating my trac tickets.  Chef takes care of these for me as separate matters!
- templated batch file to wrap around arbitrary T-SQL execution: https://gist.github.com/1491439


Hope this was useful!  Remember... if you are a runner, you need a FREE http://bit.ly/smashrun account for easy run analytics!


provision chef-managed Fedora Amazon EC2 virtual machine instances

posted Jul 10, 2011, 8:48 AM by Steve Craig   [ updated Jul 10, 2011, 10:00 AM ]

So, you've decided that the devops Chef configuration management platform is for you, you've already set up your free Opscode "hosted chef" account along with your local workstation, and now you are ready to provision and manage Amazon EC2 linux virtual machine instances?

This how-to guide is for you.  This is your all-in-one to get you from zero to N-1 fully chef-managed Fedora Amazon EC2 instances backed with custom EBS root devices in less than 60 minutes!

There are three major dependencies that this guide will walk you through on the way towards your larger goal of spinning up an unlimited number of fully customized, fully chef-managed Amazon EC2 instances.  This guide presumes the following (major) dependencies are correctly fulfilled prior to successfully creating and managing Amazon EC2 instances:

- an Amazon AWS account with EC2 access (do this first)
- a free Opscode "hosted chef" account, with chef and knife working on your local workstation (do this second)
- a ~/.ec2 directory on your knife workstation with SSH keys and authentication information (do this third - the walkthrough below covers it)

Do not proceed further until you have the first two items listed above completed!

** setup ~/.ec2 directory on knife workstation with SSH keys and authentication information (please see Robert Sosinski's EXCELLENT write-up for complete details).  I have extracted and lightly modified the relevant steps (1-12, essentially) from Robert's setup guide and posted them here.  We will be creating and connecting to EC2 instances via chef and knife, rather than the AMI tools... but we need the EC2 tools set up now to ensure success later (do this third).


2. Once signed up and into the AWS website, click the "Account" top-navigation and then select "Security Credentials" from the body of the next page (login to AWS again if necessary - WTF?).  Here, you should select the "Access Credentials" link to get your specific credentials.

4. Select the “X.509 certificates” link.

5. Click on the “Create New” link. Amazon will ask you if you are sure, say yes. Doing so will generate two files.
A PEM encoded X.509 certificate named something like cert-xxxxxxx.pem
A PEM encoded RSA private key named something like pk-xxxxxxx.pem

6. Download both of these files.
* please note: while you are here, grab the information on your "Access Key ID" as well as the "Secret Access Key" and save them locally - you will need both items to setup knife later on inside the "edit knife.rb and add your EC2 "cloud" credentials" section of the guide!

7. Download the Amazon EC2 Command-Line Tools from here: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88

8. Open the Terminal, go to your home directory, make a new ~/.ec2 directory and open it in the Finder (I use a Mac.  "Finder" is Mac specific.  Just make sure you've got the right files in the right places if you are using a different OS for your local workstation!)
$ cd
$ mkdir .ec2
$ cd .ec2
$ open .

9. Copy the certificate and private key from your download directory into your ~/.ec2 directory.

10. Unzip the Amazon EC2 Command-Line Tools, look in the new directory and move both the bin and lib directory into your ~/.ec2 directory. This directory should now have the following:
The cert-xxxxxxx.pem file
The pk-xxxxxxx.pem file
The bin directory
The lib directory

11. Now, you need to set a few environmental variables. To help yourself out in the future, you will be placing everything necessary in your ~/.bash_profile file. What this will do is automatically setup the Amazon EC2 Command-Line Tools every time you start a Terminal session. Just open ~/.bash_profile in your text editor and add the following to the end of it:
# Setup Amazon EC2 Command-Line Tools
export EC2_HOME=~/.ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/

12. As you made some changes to your ~/.bash_profile file, you will need to reload it for everything to take effect. Run this:
$ source ~/.bash_profile

Test your work up to this point by using your new AWS command line tools to list the available AMIs (if this doesn't work, you made a mistake at some section above and should ensure success before continuing onwards!)
$ ec2-describe-images -o amazon

If the AMI describe command above worked (it will generate a TON of output about available machine images), move forward and create a keypair with a nice descriptive name for SSH access to your future EC2 instances (my keypair is called smashrunsteve - you'll see that later on in this guide):
$ ec2-add-keypair smashrunsteve

This will create a RSA Private Key and then output it to the screen. DO NOT SHARE THIS KEY!  It will permit complete "root" access to your virtual machines later!  Copy the entire key, including the -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY----- lines to the clipboard. Now, go into your ~/.ec2 directory, make a new file called smashrunsteve.pem (choose your own keypair name here!), open it in your text editor, paste the entire key and save it.
Correct the permissions of your keypair file so that no one can read it.  It's private!
Inside your ~/.ec2 directory:
$ chmod 600 smashrunsteve.pem


You now have your AWS account setup and your local workstation has all the necessary authentication information to communicate with the cloud via the command line!  Good work.  Take a breath: all three of the necessary pre-requisites are now complete!

Although I was hoping that a nice, off-the-shelf Amazon AMI would provide a basic linux image for my use (and there are many Ubuntu AMIs that fit the bill), I prefer the Redhat/Centos/Fedora track, and the dearth of decent AMIs in that regard quickly led me to realize I would have to create my own AMI; luckily, the Fedora project has an official Fedora 14 AMI that I'll use as a base.  For detailed information on creating your own AMI (this is not light reading - stick with my guide instead!) take a look at the full-on Amazon AMI-creation docs.

** Note: the following section presumes that you have followed the steps above, have the three major dependencies worked out, and your local workstation already has chef/knife set up and working.  I suggest you jump over to this guide if you do not have chef and knife set up yet on your local workstation!  Our new Amazon vm will need chef so that we can deploy packages to it automatically once it is created... but the stock Fedora 14 AMI is not chef bootstrap ready...  So, let's sharpen our knife and get cooking!

First, install gem dependencies on the local workstation to extend knife's functionality for cloud management:
$ sudo gem install net-ssh net-ssh-multi fog highline --no-rdoc --no-ri --verbose

Second, install cloud provider-specific knife plugins on local workstation (this guide is ec2-specific; however, there are other cloud provider plugins... check the opscode wiki referenced above):
$ sudo gem install knife-ec2 --no-rdoc --no-ri --verbose

Next, edit knife.rb and add your EC2 "cloud" credentials to the bottom:
# EC2:
knife[:aws_access_key_id]     = "Your AWS Access Key"
knife[:aws_secret_access_key] = "Your AWS Secret Access Key"
(if you did not save this information from earlier in the guide, simply sign into the AWS website, click the "Account” top-navigation and then select "Security Credentials" to retrieve it now)

Last, configure your local knife workstation to use the correct amazon EC2 SSH private key (that will be specified below as -S)
$ ssh-agent
$ cd ~/.ec2/
$ ssh-add ./smashrunsteve.pem
(remember, I called my keypair from the beginning "smashrunsteve" but you hopefully chose a better keypair description for yourself)

At this point, we are ready to launch an EC2 instance from the command line, but we still have to get all the necessary meta-information together for knife prior to the launch.

It is time to figure out which AMI (-I) to use.  I had hoped to use centos, but numerous items blocked my path, so I jumped to Fedora instead - BRAVO, FEDORA, for making an official AMI available!
us-east-1 i386 instance store [ ami-669f680f]
us-east-1 x86_64 instance store [ ami-e291668b]

Next, figure out which instance type you want to pay for (--flavor); note that m1.small requires an i386 AMI.

Ensure that you have a valid security group defined.  The easiest way to do this at the moment is via the AWS web interface: login to AWS, hit the "EC2" top-navigation, and then go to "Security Groups" at the bottom of the left navigation.  Create a new security group (mine is called "sm-linux"; I suggest you start with one group for linux and one group for windows).  These are your basic firewall rules.  For now, a basic firewall would contain TCP SSH, HTTPS and HTTP (22, 443, 80) along with ICMP ALL.  Click the "inbound" tab and modify the security group rules to permit those basic ports.  0.0.0.0/0 (permit all internet) should be fine for now - if you are paranoid (why not? they are watching you, after all...) and want to add JUST your own workstation IP, put your workstation's public IP in and use the single-host mask "/32".  Make a note of what you called your new security-group (-G).

Note the name of the aws ssh key you want to use (-S), and the instance hostname and chef node name (-N and --node-name; do yourself a favor and use the FQDN).
Last, figure out which user to connect as initially (-x): fedora requires ec2-user with sudo: WINNING!

FINALLY, put it all together and LAUNCH THE INSTANCE with the stock Fedora AMI by typing:
$ knife ec2 server create -I ami-669f680f --flavor m1.small -G sm-linux -S smashrunsteve -x ec2-user -N first-server01.domain.com --node-name first-server01.domain.com

Wait for a little while... (I think this first machine creation takes a long time because the Fedora machine image information has to be transferred from amazon storage) 

Instance ID: i-7041a111
Flavor: m1.small
Image: ami-669f680f
Availability Zone: us-east-1b
Security Groups: sm-linux
SSH Key: smashrunsteve

Waiting for server................
Public DNS Name: ec2-107-20-15-58.compute-1.amazonaws.com
Public IP Address: 107.20.15.58
Private DNS Name: domU-12-31-39-00-6A-45.compute-1.internal
Private IP Address: 10.254.109.175


HOLY COW! The EC2 instance was successfully launched!  Now knife is trying to bootstrap chef... but unfortunately this will fail, because the default knife bootstrap is built with Ubuntu in mind... it's not a big deal; we'll customize our AMI in a few short steps below. {Note: this bootstrap didn't work first off with my initial centos AMI either: the default knife bootstrap has a major apt-get dependency, which is the wrong linux package management system from centos' yummy perspective, and RPM chef packages are not going to happen on CENTOS (deprecated) - this is one reason why I decided to back away from centos.}

The bootstrap didn't work first off with Fedora either; the base Fedora AMI lacks apt and other dependencies.  That's fine - the EC2 instance started properly and we have the information we need to connect.  We will install the necessary software on this first image ourselves, and then transform the virtual machine's disk partition into an "EBS" instance to save our work.  The long instructions on the opscode wiki for getting this done are located here, but I've got the short version for you.  SSH to your new EC2 instance (make sure you substitute the name of your SSH keypair and the public IP address that knife reported back when it launched the instance earlier):
$ ssh -i smashrunsteve ec2-user@107.20.15.58

Install the basic packages on your new instance to support Chef and future bootstrapping (Fedora 14 does install ruby 1.8.7 via yum, which is nice):
$ sudo yum install apt wget openssl make gcc rsync ruby ruby-devel ruby-irb ruby-rdoc ruby-ri

Install RubyGems from Source
$ cd /tmp
$ wget http://production.cf.rubygems.org/rubygems/rubygems-1.7.2.tgz
$ tar zxf rubygems-1.7.2.tgz
$ cd rubygems-1.7.2
$ sudo ruby setup.rb --no-format-executable
$ sudo gem install chef ohai --no-rdoc --no-ri --verbose

Install xfs filesystem dependencies on your Fedora AMI (the new EBS partition will be of type "xfs")
$ sudo yum install xfsprogs

Now the instance has been prepared for future use.  We'll now transform the root partition into an EBS backed volume so that you can save the work you've done thus far (and use it as a template).
** Please note: I specifically installed as few packages as possible on my temporary base Fedora instance so that it would be as basic and "streamlined" as possible prior to templating.  However, if you want to install additional software (and you want all your future Fedora machines to have it, too), this is the place to do it - BEFORE you rsync everything from the current running root partition to the new EBS volume that you will create and attach below.

Once you are satisfied with your Fedora VM instance (if you are following right along, it's ready now!), it's time to create and attach a new EBS volume: use the AWS web console to create the volume and attach it to your currently running Fedora instance.

After the new EBS volume is attached, create a filesystem on it and mount it on the Fedora instance:
$ sudo mkfs.xfs /dev/xvdf
$ sudo mkdir /mnt/ebs01
$ sudo mount /dev/xvdf /mnt/ebs01/

Now, rsync everything from your current Fedora AMI root partition over to the new EBS volume and get rid of the stuff you don't need:
$ sudo rsync -a --delete --progress -x / /mnt/ebs01
$ sudo rm -fr /mnt/ebs01/proc/*
$ sudo rm -fr /mnt/ebs01/mnt/*

* THE FOLLOWING /etc/fstab EDIT IS IMPORTANT!  We need to remove the line that references the old /mnt partition (we won't need it the next time we boot), and we need to modify the line for the root / partition to use the new "xfs" filesystem type
$ sudo vi /mnt/ebs01/etc/fstab
- edit out the previous /mnt partition inside /etc/fstab
- edit the / (root) partition type to be xfs instead of ext3
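If you'd rather script those two fstab edits than make them by hand in vi, a sed one-liner can do it.  This sketch demonstrates against a sample fstab modeled on the stock AMI's (the device names are assumptions - on the real instance you would target /mnt/ebs01/etc/fstab, and you should double-check your own fstab's contents first):

```shell
# Build a sample fstab like the stock AMI's (demo copy in /tmp;
# on the real instance the target is /mnt/ebs01/etc/fstab)
cat > /tmp/fstab.demo <<'EOF'
/dev/sda1  /         ext3    defaults        1 1
/dev/sda2  /mnt      ext3    defaults        0 0
/dev/sda3  swap      swap    defaults        0 0
EOF
# 1) delete the old /mnt partition line
# 2) switch the remaining root filesystem type from ext3 to xfs
sed -i -e '/[[:space:]]\/mnt[[:space:]]/d' -e 's/ext3/xfs/' /tmp/fstab.demo
cat /tmp/fstab.demo
```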

Now, unmount the ebs volume from your Fedora instance so that it can be detached, snapped, and remounted to a new AMI:
$ sudo umount /mnt/ebs01/

Detach the ebs volume from the current Fedora AMI (use the AWS web console)

Take a snapshot of the EBS volume (this uses the AWS tools we installed with Robert Sosinski's help earlier; make sure you substitute the ID of your EBS volume from the AWS web console)
$ ec2-create-snapshot vol-a3537dc8

Check on the snapshotting status (wait until the snap is complete)
$ ec2-describe-snapshots snap-cd18ebac

Register a new AMI with the snapshot, kernel from the previous machine, and other descriptive information as necessary using the AWS tools (this is what we have been working towards this entire time!)
$ ec2-register --snapshot snap-cd18ebac \
--kernel aki-407d9529 \
--root-device-name /dev/sda1 \
--description "smashrun fedora 14 linux AMI template" \
--name linux.smashrun.com \
-a i386

The register command returns an AMI ID:
IMAGE ami-f00ff599

Terminate your first Fedora AMI instance (it is now disposable; make sure you substitute your instance ID!)
$ knife ec2 server delete i-a08063c1

... And start up a new Fedora instance with your brand-new, fully customized AMI and EBS volume. Be sure to use your new ids for the AMI and node names. Watch the chef bootstrap process sail right through!
$ knife ec2 server create -I ami-f00ff599 --flavor m1.small -G sm-linux -S smashrunsteve -x ec2-user -N real01.node.com --node-name real01.node.com

HOLY CRAP! This time chef successfully bootstraps!  (and will from now on - thanks to your new custom EBS volume and AMI image!)  CELEBRATE by registering on Smashrun to track your Nike+ runs and follow me on twitter!

At this point, you can apply chef roles and recipes to your new machine.  I won't cover that here.

Basic client/server Chef setup with Mac OSX, Windows and Opscode Platform

posted May 29, 2011, 8:08 AM by Steve Craig   [ updated Aug 7, 2012, 6:05 PM ]

This posting is a natural follow-on from "Why Chef configuration management? Why not puppet?" at http://bit.ly/opschef - I'd suggest you start there.  And, once you finish this basic setup, you should be ready to move on to the next guide in the series, which will show you how to go from zero to N+1 chef-managed linux Amazon EC2 instances in less than 60 minutes.

Before you can jump right in and start cooking with Chef for configuration management, you have to set up your local environment.  Personally, I use Mac OSX (10.5, i386 to be precise) running vmware Fusion and a Windows 2003 Server virtual machine as my primary desktops, and have found this setup to be excellent for cooking with Chef!  So, after you have made the decision to use Chef (see http://bit.ly/opschef for how I made the decision to go with Chef for configuration management), let's hit the basic steps to set up your development "kitchen".

We will largely follow the Opscode web quickstart!  There's no need to re-invent the wheel, people.  The web quickstart was created for a reason, and that reason is to get you up and running as quickly as possible while still hitting the topic highlights you will be investigating in greater detail later.

The Opscode quickstart works well; however, I have made a few clarifications and modifications.  If you'd like, follow the slightly modified steps here!  I'll provide my own meta-data on the quickstart guide's main points below:

- Opscode Platform
The most powerful and flexible chef setup is client/server, and as such, requires chef-client as well as chef-server.  For those who want to get started cooking quickly with this powerful and flexible approach to configuration management and would rather not spend time setting up yet another server, grab a free chef-server account from Opscode.com (the creators of Chef, natch) on their "Opscode Platform".  As if having someone maintain uptime for your chef-server wasn't good enough, the people at opscode.com will give you a five-node chef-server for free!  Trust me, this is good enough to get started and decide if Chef is the configuration management tool for you.  This is a no-brainer; seriously.  Register for a free five-node account here.  Now you can focus on setting up your chef-client configuration, rather than worrying about whether chef-server is working correctly.

- Operating System
Because I am in the rather interesting situation of being a predominantly-Linux person who was wrangled into supporting a pure-Windows production environment for #smashrun , I've got a Mac OSX desktop running vmware Fusion with a Windows 2003 server virtual machine.  I'm also on the Opscode Platform.  For the purposes of this Quickstart guide, that all means for my basic install, I've got the opscode platform as my chef-server, I've got the local copy of my chef repository on my Mac, and I'll be installing chef-client primarily onto the Windows 2003 virtual machine.

- Development Tools
This is straightforward, right?  I need Xcode on my Mac as a necessary prerequisite because of Chef's huge dependency on Ruby and Rubygems.  After Xcode, I need git (we'll be using this modern, open-source version control tool for our chef cookbooks), Ruby and Rubygems, and finally the chef gem.

Therefore, our modified order of operations (the opscode wiki quickstart has more detail if you need it) for "Assumptions: Necessary software" is:
1. instead of setting up chef-server, set up an Opscode Platform account
2. if running Mac OSX as your base development workstation OS, install Xcode
3. install git for version control
4. install Ruby (already installed on most OSes; use your OS package management to check)
5. install Rubygems 1.3.7+ (already installed on most OSes)
6. install Chef
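After running through that list, here's a quick sanity-check sketch (it only reports what is already on your PATH; it installs nothing, and "knife" will of course show as missing until the chef gem is in):

```shell
# Report whether each tool Chef needs is available on the PATH
for tool in git ruby gem knife; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```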


- Customizing the User Environment
Once the base software is installed from "Development Tools," it is time to hook it all together and put it to use!  Numerous items will need customization: your Opscode Platform account, your local Chef Repository, your .chef configuration directory for "knife" and finally your first chef-client cookbook!

Our modified order of operations for "Customizing the User Environment" is:
1. Customize your Opscode Platform account:
  - create your "organization" (this string is important, one word / no spaces is easiest)
  - download the organization validation key (WARNING this is a PRIVATE KEY and permits chef-client nodes to be managed via the chef-server: with it, 3rd parties could register their own chef-client nodes under your organization and then retrieve all the meta-data associated with your chef installation, which is quite extensive)
  - download your organization user key (WARNING this is a PRIVATE KEY and authenticates you to the chef-server: with it, 3rd parties could masquerade as you and issue knife commands to the chef-server with your level of access)
  - download your knife configuration file (substitute YOURORGANIZATIONNAME)
2. Create a local Chef Repository (all changes are made locally, committed to version control and then uploaded to the chef-server)
3. Create a .chef directory inside your userhome directory and copy the keys and configuration from step one into it
4. Verify you are able to connect to the Opscode Platform (chef-server)!
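Steps 3 and 4 look something like the following from a terminal (a sketch; the ~/Downloads paths, ORGNAME and USERNAME are placeholders for wherever your browser saved the files and for your own organization and user names - the copy commands are left commented for that reason):

```shell
# Create the .chef configuration directory and lock down its permissions
mkdir -p "$HOME/.chef"
chmod 700 "$HOME/.chef"
# Copy in the three files downloaded from the Opscode Platform
# (hypothetical paths/names - substitute your own):
#   cp ~/Downloads/ORGNAME-validator.pem "$HOME/.chef/"
#   cp ~/Downloads/USERNAME.pem          "$HOME/.chef/"
#   cp ~/Downloads/knife.rb              "$HOME/.chef/"
ls -ld "$HOME/.chef"
# Verify connectivity to the Opscode Platform (chef-server):
#   knife client list
```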

- Setup chef-client
If you were able to connect to the Opscode Platform (chef-server) once your local User Environment was customized, it is now time to set up a new node as a chef-client!  Chef comes with some very easy methods to "bootstrap" (a semi-automatic install of most required software and settings for Chef configuration management) chef-client onto *NIX nodes, and I will not go into them now.  Remember: I need to manage Windows chef-client nodes, and that is where I'll focus for this section.  Also, Chef >= 0.10 comes with an improved ability to bootstrap Windows chef-clients; I'm going to skip that for now as well.  Consider this next section my meta-data on the Opscode wiki section for "Chef Client Installation on Windows".

1. Install pre-requisites: ruby 1.8.7
2. Install pre-requisites: Ruby DevKit
3. Install pre-requisites: extra Rubygems for Windows (win32-open3 ruby-wmi rdp-ruby-wmi win32-process win32-service windows-api windows-pr ffi )
4. Install gems: Chef and ohai
5. Configure the new node's chef "client.rb"
6. Copy your organization's validation.pem private key to the proper location on the node
7. Run Chef client!
8. Your new node should show up on the Opscode control panel "Node List":
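For step 5, a minimal client.rb looks something like the following sketch.  YOURORGANIZATIONNAME and the c:/chef paths are placeholders to adjust for your own organization and node; the snippet just writes a demo copy to /tmp so you can see the format (on the actual Windows node the file lives at c:\chef\client.rb):

```shell
# Write a demo copy of client.rb so the format is visible.
# YOURORGANIZATIONNAME and the c:/chef paths are placeholders.
mkdir -p /tmp/chef-demo
cat > /tmp/chef-demo/client.rb <<'EOF'
log_level              :info
log_location           STDOUT
chef_server_url        "https://api.opscode.com/organizations/YOURORGANIZATIONNAME"
validation_client_name "YOURORGANIZATIONNAME-validator"
validation_key         "c:/chef/validation.pem"
client_key             "c:/chef/client.pem"
EOF
cat /tmp/chef-demo/client.rb
```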

At this point, you've got your organization up on the Opscode Platform (your chef-server), you've got your first node associated with your organization running the chef-client and successfully checking in with the chef-server, and your local development environment is ready to accept chef "cookbook" files full of "recipe" (specifications of resources to manage) and "attribute" (values used throughout your configuration management system) declarations.

You now need to write your own cookbooks, grab some from github, or leverage cookbooks from Opscode.  I started with Doug MacEachern's windows cookbooks.  In the next post I'll show how I evolved some of Doug's basic Windows recipes into something a little more specific for my project, and hopefully post my github information so that you can see my cookbooks.

Why Chef configuration management? Why not puppet?

posted May 29, 2011, 4:07 AM by Steve Craig   [ updated May 29, 2011, 10:46 AM ]

Why Chef? (or, why not puppet, cfengine, BMC bladelogic, HP opsware, MS SCCM ... or anything else on the list @ http://bit.ly/cfmanage)

My easy answers to this important question cause pain in numerous other ways:
1. Windows
2. cashflow

"Cashflow" rules out cadillac solutions such as BMC bladelogic and HP opsware, and "windows" rules out most of the open source solutions on the list.  Why not SCCM, then... especially seeing as how I have a valid MSDN account?  Again, the two main reasons hit here: anticipated future licensing costs and the need for a solution that works with both the (overwhelmingly Windows) client-facing hosting environment as well as more traditional open-sourced infrastructure products for internal tools (nagios, request tracker, cacti, subversion, git, apache) running on *insert favorite flavor of* linux.

So the chef install proceeded with all due haste; I'd rather pay the "capital cost" in time to automate now because each Site Support minute spent performing click-click-click inside an RDP session increases costs now in the present and ensures the costs will remain into the future, as well.

There was one major issue, however:  I had no knowledge of Ruby.  Therefore, the numerous blog posts stating "a main advantage of Chef is that it is pure Ruby (rather than puppet, which uses its own language)" swirled in my mind and coalesced into a double-edged sword.  In the end, I realized there would be a learning curve to any new tool, and this seemed like an excellent opportunity to gain focused, small-scale exposure to Ruby that could easily lead to more in-depth uses and projects.

Although it seems like a paradox at first glance, I figure the "time-sink" risk posed by the requirement of learning a new language will most likely diminish with time and use.  If I struggle and am inefficient with Chef, it will most likely be due to other, more basic factors than any barriers thrown up by my lack of knowledge of - or inability to learn - Ruby.

The final piece of the puzzle was the enthusiasm and activity on twitter, github and the opscode Chef confluence wiki itself surrounding #chef and #devops ...  Not to mention the larger community and social network surrounding #ruby and #agile.  Imagine my surprise to learn a tweep I've been following (for years, primarily for other updates) @lusis is active inside this #devops hashtag!

So in the end the main markers were:

- free (beer)
- free (open-source)
- flexibility (full scripting language support)
- OS-compatibility for my specific project (no vendor lock-in)
- active social network streams #devops #chef

Coming up ... I've got buy-in from the rest of the #smashrun team to begin sharing some of the Chef recipes I've whipped up to begin automating Windows sysadmin tasks for http://smashrun.com  Please stay tuned, or proceed to set up your basic Chef environment!

Mac OSX with vmware: why didn't my windows cdrom/dvd mount properly?

posted May 29, 2011, 1:29 AM by Steve Craig   [ updated May 29, 2011, 1:53 AM ]

For all those ambidextrous OS users out there, here's a helpful tip.  I run Mac OSX as my base OS along with a Windows 2003 virtual machine inside of vmware Fusion.  This works like a champ, and has become my go-to setup.

The entire setup is so stable (knock on wood) that I have experienced fewer than 5 complete and total system crashes requiring a hard reboot of OSX in the 3 years that I've been running the laptop. As a result, I leave the Windows 2003 VM running all the time, and when the captain has turned off the fasten seat-belts sign, I simply close my Mac to put the entire thing into standby when I move about the cabin.

Here's the minor issue with that.  Every once in a while, a friend will give me a cdrom or dvd with some data on it, and I'll pop the disc into the drive and wait for the Mac to mount it so that I can view the pictures or other files in the Finder.

No amount of coaxing will mount the cdrom or dvd; the disc does not even show up as a device.  Here is the terminal output from a disk-free command with nothing other than my single mac hard-drive and my ipod mounted (as you can see, my HD is nearly full and so is my iPod):

ouch:~ $ df -kh
Filesystem      Size   Used  Avail Capacity  Mounted on
/dev/disk0s2   149Gi  147Gi  2.0Gi    99%    /
devfs          118Ki  118Ki    0Bi   100%    /dev
fdesc          1.0Ki  1.0Ki    0Bi   100%    /dev
map -hosts       0Bi    0Bi    0Bi   100%    /net
map auto_home    0Bi    0Bi    0Bi   100%    /home
/dev/disk1s2    15Gi   15Gi  444Mi    98%    /Volumes/iPod

Bear in mind, the cdrom or dvd that I am trying to view IS valid, and IS inside the Mac's optical drive.  Now, when I go for a listing of available disk devices, this is what comes back:

ouch:~ $ ls -tlra /dev/disk*
brw-r-----  1 root    operator   14,   2 May 12 00:16 /dev/disk0s2
brw-r-----  1 root    operator   14,   1 May 12 00:16 /dev/disk0s1
brw-r-----  1 root    operator   14,   0 May 12 00:16 /dev/disk0
brw-r-----  1 steve  operator   14,   5 May 29 04:24 /dev/disk1s2
br--r-----  1 steve  operator   14,   4 May 29 04:24 /dev/disk1s1
brw-r-----  1 steve  operator   14,   3 May 29 04:24 /dev/disk1


As you can see, I only have the primary HD (disk0 etc) and the iPod (disk1) available.  WTF, Mac?

More like - WTF, vmware Fusion!  Fusion was too smart here and is the source of my problems. Whenever I mount some USB device, vmware Fusion asks which machine to mount it to. Nifty!  But, that didn't happen with this DVD - because it was created with a windows filesystem!  Fusion saw the drive, saw it was Windows rather than Mac, and automatically (and silently) intercepted it and mounted it right up to my Windows 2003 virtual machine.

Once I shut down the Windows 2003 virtual machine and check my disk devices again via "df -kh" and "ls -latr" inside Terminal, voila!  There is "My Disk," ready and waiting (note the previously mounted HD and iPod, along with the new 3GB DVD):

ouch:~ $ df -kh
Filesystem      Size   Used  Avail Capacity  Mounted on
/dev/disk0s2   149Gi  146Gi  2.4Gi    99%    /
devfs          119Ki  119Ki    0Bi   100%    /dev
fdesc          1.0Ki  1.0Ki    0Bi   100%    /dev
map -hosts       0Bi    0Bi    0Bi   100%    /net
map auto_home    0Bi    0Bi    0Bi   100%    /home
/dev/disk1s2    15Gi   15Gi  454Mi    98%    /Volumes/iPod
/dev/disk2     3.3Gi  3.3Gi    0Bi   100%    /Volumes/My Disk

ouch:~ $ ls -tlra /dev/disk*
brw-r-----  1 root    operator   14,   2 May 12 00:16 /dev/disk0s2
brw-r-----  1 root    operator   14,   1 May 12 00:16 /dev/disk0s1
brw-r-----  1 root    operator   14,   0 May 12 00:16 /dev/disk0
brw-r-----  1 steve  operator   14,   5 May 29 04:24 /dev/disk1s2
br--r-----  1 steve  operator   14,   4 May 29 04:24 /dev/disk1s1
brw-r-----  1 steve  operator   14,   3 May 29 04:24 /dev/disk1
br--r-----  1 steve  operator   14,   6 May 29 04:29 /dev/disk2



The point:  if you are running Mac OSX along with VMware Fusion and a Windows virtual machine and cannot figure out why CDs or DVDs are not mounting on your Mac, shut down the VM and they will appear without issue.

Google Apps transition: basic important information

posted May 25, 2011, 1:55 AM by Steve Craig

Please be aware.  Google Apps (our mail, calendar and docs provider) is performing a significant shift with their back-end infrastructure.  There is a tremendous amount of marketing-speak and gee-whiz inside the message below and on google's site if you look for what all this transition means; however, I will lay out the ONE single important change that is happening:

* if you have multiple google accounts, you will not be able to be logged into multiple accounts at once inside the same browser (one login-per-tab, for example)

This is the only thing that you need to be aware of.  The technical specifics behind the change are this:  currently google uses multiple browser cookies for each of your google apps accounts, so your browser is able to "be logged into" each of the different accounts at the same time.  However, in their quest to squeeze better metrics from their massive system and make more money from more precisely targeted ads (and continue to offer the Google Apps suite of services to most of us small organizations for free) they are merging all their login cookies into the one existing google.com megacookie.

Here is the precise article that describes the "sign-on" change that I am referring to:

Here is the in-depth article that describes the "how to use multiple google accounts simultaneously" for those of us who have (for example) a "standard" @google.com account as well as a "white-labeled, Google Apps" account for their own personal/professional domain:

Make no mistake: the primary beneficiary of this transition is Google itself.  The secondary beneficiary is your organization.  The direct benefit your organization's users will experience is access to more Google services via your organization's Google Apps account (Google Voice immediately jumps to mind).

This migration will happen no later than the week of June 6th.  There are numerous tools and resources available for Google Apps organizations to transition users on their own, prior to the cutoff; however, if the organizations take no action then google will transition you during the second week of June.



---------- Forwarded message ----------
From:  <apps-noreply@google.com>
Date: Wed, May 25, 2011 at 2:31 AM
Subject: Two week notice: Google Apps accounts will be automatically
transitioned
To: Organization Apps Administrator <yyyy@zzzz>


Dear Google Apps administrator,

Google Apps accounts are undergoing an improvement, allowing you to
give users access to over 60 additional applications from Google. We
encourage you to transition your organization’s accounts on your own
schedule now.

There are several advantages to transitioning on your own schedule:
Make the change on your own timeline
Have time to try the new infrastructure with a subset of your accounts first
Use automated mailing lists and email templates to pre-notify your users
Get access to over 60 additional applications from Google right away

We plan to fully transition your organization soon -- including all
users that you have not yet transitioned yourself. If you have not
transitioned by the week of June 8, 2011, we’ll complete the
transition for you.

If you have questions about this transition, we encourage you to
explore our Help Center documentation for administrators and for
end-users.

Sincerely,
The Google Apps Team

-------------
You have received this mandatory email service announcement to update
you about important changes to your Google Apps account.

Please don't reply to this email, as we won't be able to review your
response. You may file a case in your control panel if you need
additional help.

Google Inc.
1600 Amphitheatre Parkway
Mountain View, CA 94043, USA



Microsoft Kinect Voice Data Collection Opt-out

posted Mar 20, 2011, 8:56 PM by Steve Craig

I'm not sure what the default was for the "Microsoft Voice Data Collection" setting inside the Xbox Custom Online Safety Settings screen, but I deleted the software and "blocked" it.

Microsoft says they only want the Kinect command words with ambient sound to improve recognition accuracy, but I don't want any part of it.  The software itself can be deleted from the system memory management menu, which also saves a few hundred megs on the HD.  Deleting it was my initial approach, which is why I never saw the default setting.
