Yesterday, I posted about storing passwords in MongoDB. Thanks to some feedback on G+, I changed the hashing from hashlib to bcrypt. Fast hashes like SHA and MD5 are apparently not secure enough for passwords.

Also, when I switched to bcrypt, I found an issue with my get_credentials() function. When it gets data back from MongoDB, it returns the entire array, even though I specified username. It just so happens that when I was testing the previous version, I was using the same test password, and it was hashed the same way every time. Bcrypt hashes the password differently each time it's called (a new random salt is generated for every hash), so when I switched, the passwords were never matching up.

Here’s the updated code:



It’s been a while since I’ve posted. Not being a full-time programmer, I get periods of time where I’m swamped with work and don’t get to do much coding or writing. Recently, I attended the Datto Partner Conference, followed by playing catch-up, followed by my CEO coming to town, which means hunkering down and planning strategy, and finally catch-up again. One cool thing I learned recently about Datto is that they use Python for their Shadowsnap agent. Pretty cool seeing Python used in products we use.

Anyways, the reason for this post is I’m working on a project that requires a database. I started it with MySQL, but then decided I should check out some of the more modern databases. That led me to look into MongoDB. For those of you not familiar, MongoDB is not a relational database management system. It stores documents, which are similar to records in SQL, but documents do not have to be strictly defined and populated. For example, you may have a users database. One user could have a username, full name, and email address. Another user could have a username, full name, email address one, and email address two. Also, you can embed documents within documents. For example, you could embed photos with name, description, tags, etc. in the users document. It’s pretty cool stuff.
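To illustrate that flexibility, here are two differently-shaped user documents as plain Python dicts (the field values are invented); with pymongo you’d store each one with something like db.users.insert(doc):

```python
# Two user documents with different shapes -- both valid in one collection
user_a = {
    "username": "jdoe",
    "full_name": "John Doe",
    "email": "jdoe@example.com",
}
user_b = {
    "username": "asmith",
    "full_name": "Anne Smith",
    "email_one": "anne@example.com",
    "email_two": "asmith@work.example.com",
    # An embedded document: photos live inside the user document itself
    "photos": [
        {"name": "beach.jpg", "description": "Vacation", "tags": ["beach", "2013"]},
    ],
}
```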

One of the things I needed to do was store usernames and passwords. In MySQL, you can use the password() function to hash the password and store it. From what I’ve read, MongoDB doesn’t have this feature, so you need to do it yourself. Since I first implemented this in MySQL, I had to figure out how to take the login information from a user, hash the password to match what’s stored in the MySQL database, and compare it to authenticate the user. Having already done that, I figured why not just use the same method for hashing the password to store in MongoDB. The password would be hashed like MySQL’s password() function.

Here are some sample functions with which you can set up user logins that are specific to a company. This would be for a site or application that hosts a service for multiple companies. You can have duplicate usernames because they are associated with the company document; the company documents are unique. You could easily change these to create user documents instead and have unique user logins.

If you have any questions or comments, let me know. This is the first time I’ve messed with MongoDB and still learning Python, so I’m sure there are some stupid mistakes. Don’t hesitate to point them out.

Oh yeah, the reason I had the Mongo connection lines in multiple functions instead of at the top of the file is that, in the app I’m working on, these functions are in a separate module, and I call them from another Python module.

Here’s sample output.


As with most of my Python programs so far, this was inspired by a real need at my day job. We manage our clients’ networks using Labtech’s RMM. Part of this is patch management. One of our clients needed some type of monthly report showing what patches got installed. Unfortunately, there isn’t a great canned report in Labtech to show this.

Labtech uses Crystal Reports, so I’m sure I could make a report to display this. The problem is twofold with doing it this way. One, I don’t know Crystal Reports, and two, Crystal Reports is as fast as a 28800 modem (how did we survive those?).

Since I’m learning Python and haven’t done anything with database access yet, I figured this would be the perfect place to start.

To talk to the MySQL database, I’m using mysql.connector. Here’s the run down of what this script does.

First, I wanted the ability to store the login information for the database, so I used code I wrote in the past to encrypt this into a file. I put this code into a separate file, which I import into the main script.

Also, I wanted to play around with the argparse module, so I used it to create all the command line parameters. It definitely makes things a lot simpler than the manual way I’ve done this in the past.
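For reference, a minimal argparse setup might look like this; the -sl and -ul option names come from later in the post, but the long names and help text are my guesses:

```python
import argparse

# Sketch of the command line options the script describes
parser = argparse.ArgumentParser(
    description="Report installed patches from Labtech's MySQL database")
parser.add_argument("-sl", "--savelogin", action="store_true",
                    help="prompt for database credentials and save them encrypted to a file")
parser.add_argument("-ul", "--uselogin", metavar="PASSPHRASE",
                    help="decrypt the saved credentials with this passphrase")

# Normally you'd call parser.parse_args() with no arguments to read sys.argv
args = parser.parse_args(["-ul", "mypassphrase"])
```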

One last thing of note. You’ll notice the get-patches function looks a little cumbersome. The reason is that a record isn’t based on a patch; it’s based on an install of patches. That can be 1 patch or 100 patches, and the record is one record either way. What I had to do was split the field that shows the patches installed and make each one its own record in the list. I also had to exclude the line that says “Updates require a reboot”.
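The splitting step might look something like this sketch, assuming the patches are newline-separated within the field (the function name is mine):

```python
def split_patches(field):
    # One DB record stores all patches from one install run in a single
    # text field, one per line; make each patch its own list entry and
    # drop the "Updates require a reboot" line
    patches = []
    for line in field.splitlines():
        line = line.strip()
        if line and line != "Updates require a reboot":
            patches.append(line)
    return patches
```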

Once I got this working, I simply ran the script with the -sl parameter to save the login details to a file. Then I created a scheduled task to run it with the “-ul passphrase” parameter.

Here’s the code from


Here’s the code.





This past week I’ve been working on a Python script to gather the used, free, and total disk space from a bunch of Windows servers. I’ve had to do this manually many times over the years for various planning tasks. This most recent time, a client of ours had eaten up their SAN storage in less than a year, so I wanted to see which servers were wasting a lot of SAN storage. To figure this out, I planned to look at servers with large volumes that do not hold a lot of data.

I started writing a script that uses WMI to connect to the servers and collect the information. Then I thought it would be cool to have it saved to a Google spreadsheet. After figuring out how to do that, I then wanted a way to run this on a regular basis, which requires storing a Windows login and a Google login. Obviously, you don’t want that stored in plain text or in any manner that allows someone to get your passwords. I started Googling and found a post on PyCrypto. I’m sure there are better ways to do this, but here is what I came up with.

When running the script with the configuration option, it asks you for your login information for both the servers and Google. It then puts this into XML using ElementTree. I then serialize the XML with ElementTree’s tostring function and encrypt the result with PyCrypto. Lastly, I use pickle to dump the encrypted data and the IV (initialization vector) used to encrypt the string to a file.

Here’s what the code looks like.
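A standard-library sketch of the save step: the PyCrypto AES call is stubbed out as an encrypt() callback, and the XML tag names are my assumptions, not necessarily the real schema:

```python
import pickle
import xml.etree.ElementTree as ET

def build_config_xml(server_user, server_pw, google_user, google_pw):
    # Tag names here are placeholders for illustration
    root = ET.Element("config")
    ET.SubElement(root, "server_user").text = server_user
    ET.SubElement(root, "server_password").text = server_pw
    ET.SubElement(root, "google_user").text = google_user
    ET.SubElement(root, "google_password").text = google_pw
    return ET.tostring(root)

def save_config(path, xml_bytes, encrypt):
    # encrypt() stands in for the PyCrypto AES step described in the post;
    # it should return a (ciphertext, iv) pair
    ciphertext, iv = encrypt(xml_bytes)
    with open(path, "wb") as f:
        pickle.dump((ciphertext, iv), f)
```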


Once the data is saved, you then have to be able to get it back out of the file. Here is the code to do that.
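A standard-library sketch of the load step, with the PyCrypto decryption stubbed out as a decrypt() callback and the tag names assumed:

```python
import pickle
import xml.etree.ElementTree as ET

def load_config(path, decrypt):
    # decrypt() stands in for the PyCrypto AES step; given the stored
    # (ciphertext, iv) pair it should hand back the original XML bytes
    with open(path, "rb") as f:
        ciphertext, iv = pickle.load(f)
    root = ET.fromstring(decrypt(ciphertext, iv))
    # Pull each setting back out by tag name
    return dict((child.tag, child.text) for child in root)
```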


Let me know what you think or if there is a better way. I’m sure there is.

A client of mine is looking to give access to their ERP application to their office and plant in Shanghai. They are going to do this via Citrix, so I wanted to see what latency was like. To do this, I just set up a batch file that runs pings to both locations and outputs the results to a text file. I scheduled this to run every 4 hours. Here’s the batch file.

After running for about a week, I wrote a small python script to grab all the ping times out of the text files and give me the maximum, minimum, and average response times. You are prompted for the location of the text files and the beginning pattern, so you can get the results for each site.
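The core of that parsing might look like this sketch, assuming Windows-style ping output (the function name is mine):

```python
import re

def ping_stats(text):
    # Pull every "time=XXXms" (or "time<1ms") value out of Windows ping
    # output and report max, min, and average response times in ms
    times = [int(t) for t in re.findall(r"time[=<](\d+)ms", text)]
    if not times:
        return None
    return max(times), min(times), sum(times) / float(len(times))
```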

Here’s the output for one location:

I know this is nothing special, but I figured I’d throw it out there in case any other newbies or sys admins need to get this information quickly without software.

I’ve been working on a script to speed up our failover to the cloud testing as I wrote about in this previous blog here. Unfortunately, I haven’t been able to dedicate time to this, so I’ve been working on it here and there. I’m pretty close to completing it, at least to do what I need it to do given the skill level I’m at.

To recap what I was trying to accomplish, here’s a quick run down of the problem I was trying to solve. We currently put in data recovery solutions that take images of the servers we’re protecting. These images are then replicated offsite to our partner. When we need to recover offsite, we have the ability to virtualize any of the images transferred offsite. To make sure everything is working, we do regular tests.

When these tests are performed, we choose the instances to virtualize, and we create a network to virtualize them on. Unfortunately, the way this failover works (really, the way the virtualization works) is that a new NIC is created. When a new NIC is created, the IP configuration you had on all your servers is lost. Instead, they get IPs via DHCP from the network you set up, which unfortunately doesn’t give you any options other than network address, subnet mask, and gateway. This leads to a problem where none of the servers can contact Active Directory, and when Windows servers can’t contact AD, they can take a long time to boot and an even longer time to log in.

Another problem is that we have agents running on the servers being protected, because we monitor them as well. When these test failover servers boot up, we start getting calls about servers hard booting. This is because the live and test servers have the same unique ID in the agent, and they are both reporting back as the same server.

To solve these problems, I wanted to write a script that stops and disables the services that we don’t want running during test failover. I also wanted the script to assign a designated IP configuration so the servers could find the domain controllers.

Here’s what I came up with so far. I have it running in Scheduled Tasks on Windows Startup. Because a new NIC has to be installed during boot up, I built a delay in to give it enough time to complete.
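As a rough sketch of the approach, here is how the service and IP steps could be driven through sc and netsh via subprocess. The service names, NIC name, and IP values below are placeholders, and dry_run just prints the commands instead of executing them:

```python
import subprocess
import time

# Placeholder values -- substitute the real services and network settings
SERVICES_TO_DISABLE = ["LTService", "SomeAgentService"]
NIC_NAME = "Local Area Connection"
IP, MASK, GATEWAY = "192.168.100.10", "255.255.255.0", "192.168.100.1"

def build_commands():
    cmds = []
    for svc in SERVICES_TO_DISABLE:
        cmds.append(["sc", "stop", svc])
        # note the space after "start=" -- sc requires it as a separate token
        cmds.append(["sc", "config", svc, "start=", "disabled"])
    cmds.append(["netsh", "interface", "ip", "set", "address",
                 "name=" + NIC_NAME, "static", IP, MASK, GATEWAY])
    return cmds

def run(delay=120, dry_run=True):
    # Wait for the new NIC to finish installing before touching the config
    time.sleep(0 if dry_run else delay)
    for cmd in build_commands():
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.call(cmd)
```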

This isn’t working flawlessly yet, but I wanted to put it out there and see if anyone had some feedback or better ideas. Two of the problems I’m having are as follows:

1. The script isn’t working consistently. This may be related to execution time. I’m considering changing it to a service, and then possibly I can do some type of pause and loop to confirm the NIC fully loaded. Some servers seem to work like I expect, and some seem to only work after I reboot them a second time.

2. Not all the services are stopping and being disabled. I can’t understand why, since it works for almost all the services. Sometimes the service is disabled but running, which is why I put a reboot in as the last action. Sometimes, a service will be stopped but not set to disabled, which means it will be running after the reboot.

Testing this is a pain. Everything seems to work when I have it on a test machine and run it manually. It even seems to work when I schedule it. The problem is the startup doesn’t have the same process it does when you are going from a development server to the failed over virtual server. To test it in the correct scenario, I have to update the script, copy it to the live server, and then wait for the live server to back up and replicate offsite. That can take a decent amount of time.

Anyway, let me know if you see any major amateur mistakes or better ways to do something.



28. May 2013 · 3 comments · Categories: Code

I wrote this script a little while ago, but I wanted to re-write it so I could share it. Originally, I had our API key, email addresses, and smtp server address hard coded into the script. I obviously didn’t want to share that, and I didn’t want anyone to have to open the script and find and edit text. This led me to figure out how to save to XML, something I haven’t done yet as a beginner.

To give a little background on why I wrote this script, let me start by saying how lazy and forgetful I can be. We are a partner of Datto, whom we use for our backup solutions. Without getting into too much detail about the backups, it is a managed service we provide, which means we have to monitor the backups for our clients and resolve any backup issues. To check the backups of all our clients, we simply log in to Datto’s partner portal and drill down into each appliance to check the statuses. There are two problems here that I already highlighted. One, I can be pretty lazy, so logging in and drilling down into each appliance is a pain in the butt for me. Two, I can be forgetful, so depending on myself to remember to log in and check all these servers when I happen to walk into firefighting first thing in the morning is not the most reliable way to make sure backups get checked. This is where the script comes in, thanks to Datto’s XML API.

Using this script, I can pull the backup statuses for all the servers and have them formatted in a nice email that I know I will read. I have this script running first thing in the morning and at lunch time, as our backups are hourly and I want to make sure there haven’t been any servers without backups for more than a few hours.

The way the script works is you run it with a -config option to generate the XML file it will use to store its configuration. It will ask you for your API key, email subject, from address, to address, and SMTP server address. After the file is generated, you simply run the script itself without any options. It will grab the info from that file, grab the info from Datto, generate the email, and send it to you in a tabular format similar to the following:

Server Name    Status   Last Snapshot
CLIENTA-SRV1   Success  2013-05-28 11:03:19

Server Name    Status   Last Snapshot
CLIENTB-SRV1   Success  2013-05-28 11:10:05
CLIENTB-SRV2   Success  2013-05-28 11:08:37
CLIENTB-SRV3   Success  2013-05-28 11:08:39

Schedule this with cron on Linux or Task Scheduler on Windows, and you can save yourself the time of logging into Datto’s website and drilling down into each appliance.
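A sketch of how a tabular body like the sample above might be assembled; the data shape, helper name, and column widths are my assumptions:

```python
def format_report(appliances):
    # appliances maps client name -> list of (server, status, last_snapshot)
    # tuples; this shape is assumed from the sample output above
    lines = []
    for client in sorted(appliances):
        lines.append("%-16s %-8s %s" % ("Server Name", "Status", "Last Snapshot"))
        for server, status, snap in appliances[client]:
            lines.append("%-16s %-8s %s" % (server, status, snap))
        lines.append("")  # blank line between clients
    return "\n".join(lines)
```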

Eventually, I plan on adding to this script to update a custom field in our RMM platform. The field will be something like last backup or maybe two fields, last successful backup and last backup status. Then if the time since the last successful backup gets too far out, I can have our RMM generate a support ticket. Then I won’t even have to look at the email. Lazy or efficient?

There are a couple other things I may change or just try with future scripts. After writing this, I read about the Python module optparse, which seems like a much better way to handle command line options for your script than the way I’ve been doing them. Also, I’m thinking about changing the configuration settings from XML to the pickle module, which seems much simpler. I would have played with those changes before posting this, but I’m getting ready to head off to training on Business Continuity in Philadelphia and won’t have time.

Oh yeah, I wasted a lot of time trying to figure out how to make this script run whether you are on Python 2.7 or Python 3. After putzing around, I found the only thing I needed to do was add the following lines. It worked like a champ.
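For reference, the usual 2.7/3 compatibility lines look something like this (an educated guess; the script's exact lines may differ):

```python
# The standard __future__ imports that make Python 2.7 behave like 3
from __future__ import print_function
from __future__ import unicode_literals

print("works the same on 2.7 and 3")
```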

As always, if you have any recommendations, let me know. Here’s the code.

Download script


18. May 2013 · 4 comments · Categories: Code

Below is a script that I wrote to address a problem we were having at our clients. Most of our clients have their email hosted on Appriver, on Exchange 2010. All of a sudden, out of the blue, users started reporting that Outlook was not connecting to the servers, or if it did connect, it wasn’t long before it disconnected. We contacted Appriver to see if they were having issues. It turns out Microsoft put out a patch that was causing XP machines to have issues connecting to the Exchange 2010 farm. The workaround was to put an entry for the front-end server into the hosts file.

So now that I had the fix, did I want to connect to every XP machine and edit the hosts file? Hell no. To quickly fix this, I wrote a down and dirty version of this script with the host entry statically in the script. It was 4 lines: simply opening the hosts file, writing the line, and closing the file. I then used our RMM, Labtech, to create a script that would run this on XP machines. We ran it, and within seconds of it running on the computers, email was working again.

Obviously, it would be bad if this script ran repeatedly on the same computer, as it would put in duplicate host entries. Now that the fire was out, I wanted to write a version that I could reuse in the future just by passing an IP address and hostname to the script. Unlike my quick fix, the reusable script would have to have some checks built in. It would have to make sure the host wasn’t already in the hosts file, and it would have to make sure the IP address and the hostname are valid.

Here’s what I have so far. I’d like to add the ability to delete and update entries as well. In order to run this on Linux, you must run it with sudo, su, etc. On Windows, you’ll want to run it as administrator. Luckily, we’re able to do that via our RMM platform.
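The two checks might look like this sketch (function names are mine):

```python
import socket

def valid_ip(address):
    # socket.inet_aton raises on anything that isn't a valid IPv4 address;
    # the dot count rules out shorthand forms like "127.1" that it accepts
    try:
        socket.inet_aton(address)
        return address.count(".") == 3
    except socket.error:
        return False

def entry_exists(hosts_text, hostname):
    # True if any non-comment line in the hosts file already maps hostname
    for line in hosts_text.splitlines():
        fields = line.split("#")[0].split()
        if len(fields) >= 2 and hostname in fields[1:]:
            return True
    return False
```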

Let me know your thoughts. I’m sure there are many ways to improve this, and I’m sure there are other ways to do it.


For those of you not following this mess of me learning to program in Python, this is the third option so far for getting Dell warranty expirations via web scraping. The first option I posted was the one I did without any direction from anyone who knows what they are doing; I used a string function. You can read that post here. After I posted that version on Google+ and Reddit, I got recommendations to do this with regex, Scrapy, and BeautifulSoup. My last post covered getting the expiration date via regex. This post gets it with BeautifulSoup, which, I must say, was much better once I figured out how to do what I wanted.

Here’s a quick run down of how I’m doing this. Again, I’m sure some of this could be done much better.

The modules I use are sys for getting the command line arguments, requests to pull the data from Dell, and lastly BeautifulSoup to parse the HTML. The function is only a few lines. First, I pull the HTML from Dell, followed by parsing it with BeautifulSoup. Next, I find all the TopTwoWarrantyListItems and assign them to the variable lis. Lastly, I compare those list items to pull out the max value, which is assigned and returned as the warranty expiration date.
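A sketch of the BeautifulSoup approach: the inline HTML below is a made-up stand-in for the page requests would fetch, though the class name comes from the post, and the date comparison is done as (year, month, day) tuples so max() picks the latest:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment of the Dell warranty page; in the real script this
# HTML comes from requests.get(url).text
html = """
<ul>
  <li class="TopTwoWarrantyListItem">1/15/2014</li>
  <li class="TopTwoWarrantyListItem">6/30/2016</li>
</ul>
"""

def warranty_expiration(page_html):
    soup = BeautifulSoup(page_html, "html.parser")
    lis = soup.find_all("li", class_="TopTwoWarrantyListItem")
    def as_tuple(li):
        # Re-order M/D/Y text into (year, month, day) ints for comparison
        m, d, y = li.get_text().strip().split("/")
        return int(y), int(m), int(d)
    y, m, d = max(as_tuple(li) for li in lis)
    return "%d/%d/%d" % (m, d, y)
```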

Let me know what you think, good and bad. Every time I post one of these, I get some new advice that helps me learn.



I got a decent amount of feedback and advice on my post the other day about getting a Dell warranty expiration with web scraping. It was recommended that I change my scraping to use regex, Beautiful Soup, or Scrapy. I figured I’d do all three and make a post on each one.

As you know if you read any of this blog so far, I’m just learning python, so I have a ton to learn. What better way than try the different options presented to me.

The first option I decided to try since I already did a little bit of it for other scripts I haven’t blogged about yet is scraping it via regex. This was quite challenging for a noob like me. I couldn’t seem to quite get the expression down to grab all the dates needed. The original expression I was using would grab the last date, but would skip right over the first one. I have no clue why.

One thing I learned from doing this scrape with regex is that my original script was wrong. It was grabbing a date, but not necessarily the correct date. Dell’s website can show multiple expiration dates. If you renew, it’s going to show the new warranty and the old warranty. If you have a default warranty and upgraded to a better warranty, it’s going to show both. By using regex, I was able to grab all the dates and compare them to find the correct expiration.

Another thing I learned about this task in particular is the slowness is not so much my code but Dell’s crappy website. As a network/systems guy, I have to go on Dell’s site a lot, and it is horribly painful to use because of the speed.

OK, so here’s a quick run down of the code and then the actual code.

First, it grabs the URL contents as a string. Then it performs a regular expression search looking for the dates, creating a list of tuples with the date parts in each tuple. Next, a while loop runs through the tuples, grabs the date parts out as integers, and puts them into a list of tuples so they can be compared. Once I have the dates in a list of tuples, I just use the max function to find out which is the correct (latest) date. I’m not sure this is the greatest way to do it, but it seems to work on the service tags I’ve tested. Lastly, I convert the date back into a string and return it.
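Here is a sketch of that flow; the sample text and the regex are my assumptions about what the Dell page contains, not the script's actual pattern:

```python
import re

# Hypothetical fragment of the page text; the real string comes from requests
page = "Expires 1/15/2014 ... Expires 6/30/2016"

def latest_date(text):
    # findall returns a list of (month, day, year) string tuples
    dates = re.findall(r"(\d{1,2})/(\d{1,2})/(\d{4})", text)
    # Re-order to (year, month, day) ints so max() picks the latest date
    as_ints = [(int(y), int(m), int(d)) for m, d, y in dates]
    y, m, d = max(as_ints)
    return "%d/%d/%d" % (m, d, y)
```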

As I said with my original post, I’m sure this could be improved a million ways. I’m just learning, so any pointers would be appreciated.