Yesterday, I posted about storing passwords in MongoDB. Thanks to some feedback on G+, I changed the hashing from hashlib to bcrypt. SHA and MD5 are fast, unsalted hashes, so they can be brute-forced quickly; bcrypt is salted and deliberately slow, which makes it a much better fit for passwords.

Also, when I switched to bcrypt, I found an issue with my get_credentials() function. When it got data back from MongoDB, it got the entire document, even though I had specified the username. It just so happened that when I was testing the previous version, I was using the same test password, and it was hashed the same way every time, so the bug was masked. Bcrypt hashes the password differently each time it's called (it generates a new random salt), so when I switched, the passwords never matched up.

Here’s the updated code:
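The gist of it, as a minimal sketch using current pymongo and bcrypt APIs (the connection details, database, and collection names are placeholders, not my actual values):

```python
import bcrypt
from pymongo import MongoClient

# Placeholder connection and names
db = MongoClient("localhost", 27017).testdb

def set_credentials(username, password):
    # bcrypt generates a new random salt on every call, so the same
    # password hashes to a different value each time
    hashed = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())
    db.users.update_one({"username": username},
                        {"$set": {"password": hashed}}, upsert=True)

def get_credentials(username, password):
    # Project out just the password field instead of taking the whole
    # document, which is what bit me above
    user = db.users.find_one({"username": username}, {"password": 1})
    if user is None:
        return False
    # checkpw re-hashes using the salt stored inside the saved hash, so
    # the comparison works even though hashpw output varies per call
    return bcrypt.checkpw(password.encode("utf-8"), user["password"])
```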



It's been a while since I've posted. Not being a full-time programmer, I go through stretches where I'm swamped with work and don't get to do much coding or writing. Recently, I attended the Datto Partner Conference, followed by playing catch-up, followed by my CEO coming to town, which meant hunkering down and planning strategy, and finally catch-up again. One cool thing I learned recently about Datto is that they use Python for their ShadowSnap agent. It's pretty cool seeing Python used in products we use.

Anyway, the reason for this post is that I'm working on a project that requires a database. I started with MySQL but then decided I should check out some of the more modern databases, which led me to MongoDB. For those of you not familiar with it, MongoDB is not a relational database management system. It stores documents, which are similar to records in SQL, except documents don't have to follow a strictly defined schema. For example, you might have a users collection: one user could have a username, full name, and email address, while another could have a username, full name, and two email addresses. You can also embed documents within documents; for example, you could embed photos with a name, description, tags, and so on inside a user document. It's pretty cool stuff.
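For instance (made-up data), two documents in the same users collection might look like this, the second with photo documents embedded in it:

```python
# Example documents; the fields don't have to match between them
user_a = {
    "username": "jsmith",
    "full_name": "John Smith",
    "email": "jsmith@example.com",
}
user_b = {
    "username": "mjones",
    "full_name": "Mary Jones",
    "email_one": "mjones@example.com",
    "email_two": "mary@example.org",
    "photos": [  # documents embedded inside the user document
        {"name": "beach.jpg", "description": "Vacation", "tags": ["beach"]},
    ],
}
```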

One of the things I needed to do was store usernames and passwords. In MySQL, you can use the PASSWORD() function to hash a password and store it. From what I've read, MongoDB doesn't have this feature, so you need to do it yourself. Since I first implemented this in MySQL, I had already figured out how to take the login information from a user, hash the password the same way MySQL does, and compare it against what's stored in the database to authenticate the user. Having already done that, I figured why not use the same method for hashing passwords to store in MongoDB, so they would be hashed just like MySQL's PASSWORD() function does it.
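For reference, MySQL's PASSWORD() function (4.1 and later) is just an unsalted double SHA-1, so emulating it in Python looks something like this:

```python
import hashlib

def mysql_password(password):
    # MySQL 4.1+ PASSWORD(): '*' + UPPER(HEX(SHA1(SHA1(password))))
    inner = hashlib.sha1(password.encode("utf-8")).digest()
    return "*" + hashlib.sha1(inner).hexdigest().upper()
```

Being fast and unsalted, it's exactly the kind of hash that prompted the switch to bcrypt described at the top of the page.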

Here are some sample functions that set up user logins specific to a company, for a site or application that hosts a service for multiple companies. Usernames can be duplicated across companies because each one is associated with a company document, and the company documents are unique. You could easily change these to create user documents instead and have unique user logins.
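Here's a minimal sketch of that setup (collection and field names are placeholders, and the hashing uses bcrypt per the update above rather than the MySQL-style hash I started with):

```python
import bcrypt
from pymongo import MongoClient

db = MongoClient("localhost", 27017).testdb  # placeholders

def add_company(company):
    # Company documents are unique; users get embedded inside them
    if db.companies.find_one({"company": company}) is None:
        db.companies.insert_one({"company": company, "users": []})

def add_user(company, username, password):
    # The same username can exist under two different companies, since it
    # only has to be unique within its own company document
    hashed = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())
    db.companies.update_one(
        {"company": company},
        {"$push": {"users": {"username": username, "password": hashed}}})

def get_credentials(company, username, password):
    # The positional projection (users.$) returns only the matching user
    doc = db.companies.find_one(
        {"company": company, "users.username": username}, {"users.$": 1})
    if doc is None:
        return False
    return bcrypt.checkpw(password.encode("utf-8"),
                          doc["users"][0]["password"])
```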

If you have any questions or comments, let me know. This is the first time I've messed with MongoDB, and I'm still learning Python, so I'm sure there are some stupid mistakes. Don't hesitate to point them out.

Oh yeah: the reason I had the Mongo connection lines in multiple functions instead of at the top of the file is that in the app I'm working on, these functions live in a separate module, and I call them from another Python module.

Here’s sample output.

[Screenshot: mongologin sample output]

As with most of my Python programs so far, this was inspired by a real need from my day job. We manage our clients' networks using Labtech's RMM, and part of that is patch management. One of our clients needed some type of monthly report showing which patches got installed. Unfortunately, there isn't a great canned report in Labtech to show this.

Labtech uses Crystal Reports, so I'm sure I could build a report to display this. The problem is twofold: one, I don't know Crystal Reports, and two, Crystal Reports is as fast as a 28800 modem (how did we survive those?).

Since I’m learning Python and haven’t done anything with database access yet, I figured this would be the perfect place to start.

To talk to the MySQL database, I'm using mysql.connector. Here's the rundown of what this script does.
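The query side is short; in the real script the connection details come out of the encrypted config, and the table and column names below are placeholders rather than Labtech's actual schema:

```python
import mysql.connector

# Placeholder connection details and schema
conn = mysql.connector.connect(host="labtech", user="report",
                               password="secret", database="labtech")
cursor = conn.cursor()
cursor.execute("SELECT ComputerName, InstallDate, PatchList "
               "FROM patch_installs WHERE InstallDate >= %s",
               ("2013-09-01",))
for computer, install_date, patch_list in cursor:
    print(computer, install_date)
cursor.close()
conn.close()
```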

First, I wanted the ability to store the login information for the database, so I used code I wrote in the past to encrypt it into a file. I put this code into a separate file called encryptconfig.py, which I import into the ltpatches.py file.

Also, I wanted to play around with the argparse module, so I used it to create all the command line parameters. It definitely makes things a lot simpler than the manual way I've handled arguments in the past.
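The -sl and -ul options mentioned below are set up roughly like this (the long option names are guesses):

```python
import argparse

parser = argparse.ArgumentParser(
    description="Report Windows patches installed by Labtech")
parser.add_argument("-sl", "--savelogin", action="store_true",
                    help="prompt for database login details and save "
                         "them encrypted to a file")
parser.add_argument("-ul", "--uselogin", metavar="PASSPHRASE",
                    help="decrypt the saved login details with PASSPHRASE")
args = parser.parse_args()
```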

One last thing of note: you'll notice the get_patches function looks a little cumbersome. The reason is that a record isn't based on a patch; it's based on an install run, which can cover 1 patch or 100 patches, and it's one record either way. What I had to do was split the field that lists the patches installed and make each one its own entry in the list. I also had to exclude the line that says "Updates require a reboot".
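A simplified version of that logic, assuming the field lists one patch per line:

```python
def get_patches(patch_field):
    # One install record can cover many patches; split the combined field
    # so each patch becomes its own entry in the list
    patches = []
    for line in patch_field.splitlines():
        line = line.strip()
        # Skip blanks and the reboot notice, which isn't an actual patch
        if line and line != "Updates require a reboot":
            patches.append(line)
    return patches
```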

Once I got this working, I simply ran the script with the -sl parameter to save the login details to a file. Then I created a scheduled task to run it with the “-ul passphrase” parameter.

Here’s the code from ltpatches.py.


Here's the encryptconfig.py code. It's essentially the same PyCrypto approach I describe in the post on encrypting login details further down the page.


To date, I've only been able to write little command line programs that I needed for various tasks, like getting average latency between offices. Command line utilities are great for sys admins, and they're fine when it's just something I need personally. But as soon as you want to provide a utility to a regular user, you had better put a GUI in front of them, or they're going to act like you asked them to play Tank in the Matrix.

The first GUI app I built was an expansion of a command line app I wrote for exporting the Exchange Global Address List. The command line app exported it to CSV, which was all I needed for my purposes: I wrote it so I could email an updated address list for a client of mine to their office in Shanghai, which has its own email system.

Having been asked how to print out the GAL many times over the years, I decided to write a GUI app for regular users that gives them the ability to pick the fields they want. Here's what the app looks like.

[Screenshot: the GAL export app]

So how did I learn to write this? First, I watched a couple of basic YouTube videos and skimmed through this PyQt4 tutorial. The tutorial was great for giving me an idea of how the code works and a place to start, although I still felt like I was missing a lot. Once I have a little understanding, the best way for me to learn is to just jump in and get at it.

Now, I know some people might not like this, but I learned fastest by jumping into QT Designer. After poking around, I quickly figured out how to draw up my screens. OK, I have the screen designed. Now what?

I got stuck here for a second until I found out how to convert the file you create with QT Designer into a Python file. You do that with the pyuic4 command. Here is the command I used.

pyuic4 -x -o pythonfilename.py qtdesignerfilename.ui

The -x adds a main section at the end, so you can run the script and see your GUI. This isn't necessary with the way you'll eventually want to structure your app; more on that later. The -o specifies your output file name, and the last file name is the QT Designer file.

Wow, I can launch the GUI now, but guess what? It doesn't do anything. How do I put code in to make it work? I remember when I learned a little VB back in school: it was simple, just double-click the object and start typing your code. Not so with Python. Well, back to YouTube in search of how to make buttons, checkboxes, and so on work.

After watching a couple of videos, I found that you need to put something similar to the following line of code into the function that sets up your UI:
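```python
self.btnClose.clicked.connect(self.buttonClicked)
```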

self refers to the class itself, btnClose is the close button I drew on the GUI, clicked is the action being taken on the button, and connect is what hooks that action up to the function you want to run. In this case, it's going to call a function named buttonClicked. Now all you have to do is put the code in the buttonClicked function. I basically used this same line for every button, and then my buttonClicked function checks to see which button was clicked.

In this example, let's say you click the Close button. That calls the buttonClicked function, which will have something similar to the following if statement:
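```python
if self.sender().text() == "Close":
    sys.exit()  # sys is imported at the top of the file
```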

This gets the text of the button and checks whether it equals "Close". If so, the program exits. Seems pretty simple, right?

Once you see how to do something for one object, it’s easier to figure out the other ones. Two invaluable resources are the list of PyQt classes and the list of PySide classes. Using these two sites, you can pretty much figure out how to change objects, how to read their properties, etc.

One example of reading an object's properties is the checkboxes for the fields in the screen above. All I did (not sure it's the best way) was check the value of isChecked() for each of the checkboxes when the Export button was clicked. Using if statements to see which were checked, I built a list of the fields the user requested. Here's the code.
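(The checkbox and field names below are placeholders for the ones I actually defined in QT Designer.)

```python
def exportClicked(self):
    # Build the list of fields the user ticked off
    fields = []
    if self.chkFullName.isChecked():
        fields.append("Full Name")
    if self.chkEmail.isChecked():
        fields.append("Email Address")
    if self.chkPhone.isChecked():
        fields.append("Phone Number")
    return fields
```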

I'm sure you get the idea. Now, let's get back to what I was saying about not needing the execution code at the bottom of the Python file. You don't need it other than to test what the GUI looks like. When you actually write your app, you'll want most of your logic in a separate Python file that imports the file you generated with pyuic4. The reason is that if you change your GUI and run pyuic4 again, you'll lose any code you added to the generated file. By keeping your logic in a separate file, you can change the GUI and rerun pyuic4 to your heart's content.
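In sketch form, assuming pyuic4 wrote its output to galwindow.py with a class named Ui_MainWindow (both names are assumptions):

```python
import sys

from PyQt4 import QtGui
from galwindow import Ui_MainWindow  # the pyuic4-generated file

class GalApp(QtGui.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super(GalApp, self).__init__()
        self.setupUi(self)  # builds the widgets drawn in QT Designer

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    window = GalApp()
    window.show()
    sys.exit(app.exec_())
```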

One last thing you're probably wondering is what the difference is between PyQt and PySide. From what I understand, it mostly comes down to licensing; I don't think I'm knowledgeable enough to speak to functionality differences. PySide has a much more flexible licensing model. The nice thing is they are very much alike: if you write your app in PyQt and want to change, simply update your import statements and almost everything should work. I ran into a few exceptions.

On the second app I wrote, I had password fields. Changing the echo mode for a password field is different between the two, but not significantly. You just have to hit those two sites I listed above, and you can figure out the differences. I'll be writing more about the second app shortly and posting the code, so you'll get to see PySide in action.

As always, let me know what you think. I learn a lot by just jumping in, but I’m sure I do a lot of bad things as well.


This past week I've been working on a Python script to gather the used, free, and total disk space from a bunch of Windows servers. I've had to do this manually many times over the years for various planning tasks. This time around, a client of ours had eaten up their SAN storage in less than a year, so I wanted to see which servers were wasting SAN storage by looking for servers with large volumes that don't hold much data.

I started writing a script that uses WMI to connect to the servers and collect the information. Then I thought it would be cool to have it saved to a Google spreadsheet. After figuring out how to do that, I wanted a way to run it on a regular basis, which requires storing a Windows login and a Google login. Obviously, you don't want those stored in plain text or in any manner that lets someone recover your passwords. I started Googling and found a post on PyCrypto. I'm sure there are better ways to do this, but here is what I came up with.
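The collection piece can be done with the wmi package (my module choice here is an assumption, and the server and credentials are placeholders):

```python
import wmi

c = wmi.WMI(computer="SERVER01", user=r"DOMAIN\admin", password="secret")
for disk in c.Win32_LogicalDisk(DriveType=3):  # 3 = local fixed disks
    total = int(disk.Size)
    free = int(disk.FreeSpace)
    print(disk.DeviceID, "used:", total - free, "free:", free,
          "total:", total)
```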

When you run the script with the configuration option, it asks for your login information for both the servers and Google. It puts this into XML using ElementTree, serializes the XML with ElementTree's tostring function, and encrypts the result with PyCrypto. Lastly, it uses pickle to dump the encrypted data and the IV (initialization vector) used during encryption to a file.

Here’s what the code looks like.
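Here's a sketch of that flow; the element names, key derivation, and cipher mode are illustrative choices, not necessarily what the original script used:

```python
import getpass
import pickle
import xml.etree.ElementTree as ET
from hashlib import sha256

from Crypto import Random
from Crypto.Cipher import AES

def save_config(filename, passphrase):
    # Gather the logins and build the XML document
    root = ET.Element("config")
    ET.SubElement(root, "winuser").text = input("Windows login: ")
    ET.SubElement(root, "winpass").text = getpass.getpass("Windows password: ")
    ET.SubElement(root, "googleuser").text = input("Google login: ")
    ET.SubElement(root, "googlepass").text = getpass.getpass("Google password: ")
    xml_bytes = ET.tostring(root)

    # Derive a 32-byte AES key from the passphrase and encrypt the XML;
    # CFB mode avoids having to pad the plaintext to the block size
    key = sha256(passphrase.encode("utf-8")).digest()
    iv = Random.new().read(AES.block_size)
    cipher = AES.new(key, AES.MODE_CFB, iv)

    # Pickle the IV alongside the ciphertext so decryption can find it
    with open(filename, "wb") as f:
        pickle.dump((iv, cipher.encrypt(xml_bytes)), f)
```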


Once the data is saved, you then have to be able to get it back out of the file. Here is the code to do that.
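And a matching sketch for the read side, reusing the imports from above:

```python
def load_config(filename, passphrase):
    # Recreate the key, unpickle the IV and ciphertext, then decrypt
    key = sha256(passphrase.encode("utf-8")).digest()
    with open(filename, "rb") as f:
        iv, ciphertext = pickle.load(f)
    cipher = AES.new(key, AES.MODE_CFB, iv)
    # Parse the decrypted XML back into an ElementTree element
    return ET.fromstring(cipher.decrypt(ciphertext))
```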


Let me know what you think or if there is a better way. I’m sure there is.

A client of mine is looking to give their office and plant in Shanghai access to their ERP application. They're going to do this via Citrix, so I wanted to see what the latency was like. To do this, I set up a batch file that pings both locations and appends the results to text files, scheduled to run every 4 hours. Here's the batch file.
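It's nothing more than pings redirected to log files; the hostnames and counts here are placeholders:

```bat
@echo off
rem Placeholder hostnames; each site appends to its own log file
ping -n 20 office.shanghai.example.com >> C:\pinglogs\shanghai-office.txt
ping -n 20 plant.shanghai.example.com >> C:\pinglogs\shanghai-plant.txt
```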

After letting that run for about a week, I wrote a small Python script to grab all the ping times out of the text files and give me the maximum, minimum, and average response times. You're prompted for the location of the text files and the beginning of the file name pattern, so you can get the results for each site.
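A condensed version of the parsing logic; the regex matches the time= part of Windows ping replies:

```python
import glob
import os
import re

path = input("Directory containing the ping logs: ")
pattern = input("Beginning of the file names for this site: ")

times = []
for logfile in glob.glob(os.path.join(path, pattern + "*.txt")):
    with open(logfile) as f:
        # Lines look like: Reply from x.x.x.x: bytes=32 time=213ms TTL=54
        times.extend(int(ms) for ms in re.findall(r"time[=<](\d+)ms",
                                                  f.read()))

print("Replies: %d" % len(times))
print("Minimum: %dms  Maximum: %dms  Average: %.1fms"
      % (min(times), max(times), sum(times) / float(len(times))))
```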

Here’s the output for one location:

I know this is nothing special, but I figured I’d throw it out there in case any other newbies or sys admins need to get this information quickly without software.

I've been working on a script to speed up our failover-to-the-cloud testing, which I wrote about in a previous post. Unfortunately, I haven't been able to dedicate time to it, so I've been working on it here and there. I'm pretty close to completing it, at least to the point of doing what I need given the skill level I'm at.

To recap what I'm trying to accomplish, here's a quick rundown of the problem. We put in data recovery solutions that take images of the servers we're protecting. These images are then replicated offsite to our partner. When we need to recover offsite, we have the ability to virtualize any of the images transferred there, and to make sure everything is working, we do regular tests.

When these tests are performed, we choose the instances to virtualize and create a network to virtualize them on. Unfortunately, the way this failover works (really, the way virtualization works), a new NIC is created, and when a new NIC is created, the IP configuration you had on all your servers is lost. Instead, they get IPs via DHCP from the network you set up, which unfortunately doesn't give you any options other than network address, subnet mask, and gateway. This leads to a problem where none of the servers can contact Active Directory, and when Windows servers can't contact AD, they can take a long time to boot and an even longer time to log in.

Another problem is that we have agents running on the protected servers, because we monitor them as well. When these test failover servers boot up, we start getting calls about servers hard booting. That's because the live and test servers have the same unique ID in the agent, and both report back as the same server.

To solve these problems, I wanted to write a script that stops and disables the services that we don’t want running during test failover. I also wanted the script to assign a designated IP configuration so the servers could find the domain controllers.

Here's what I came up with so far. I have it running as a Scheduled Task at Windows startup, and because a new NIC has to be installed during boot, I built in a delay to give that time to complete.
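In outline it looks like the following; the service names, IP settings, and delay length are placeholders:

```python
import subprocess
import time

import win32serviceutil

AGENT_SERVICES = ["LTService", "LTSvcMon"]  # placeholder service names

# The new NIC is still being installed when startup tasks fire, so wait
time.sleep(120)

for svc in AGENT_SERVICES:
    # Disable first so the service stays down after the final reboot
    subprocess.call(["sc", "config", svc, "start=", "disabled"])
    try:
        win32serviceutil.StopService(svc)
    except Exception:
        pass  # a failed stop is one of the inconsistencies noted below

# Placeholder static IP settings so the server can find the DCs
subprocess.call(["netsh", "interface", "ip", "set", "address",
                 "Local Area Connection", "static",
                 "10.99.0.10", "255.255.255.0", "10.99.0.1"])
subprocess.call(["netsh", "interface", "ip", "set", "dns",
                 "Local Area Connection", "static", "10.99.0.5"])

# Reboot to refresh communication with the domain controllers
subprocess.call(["shutdown", "/r", "/t", "0"])
```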

This isn’t working flawlessly yet, but I wanted to put it out there and see if anyone had some feedback or better ideas. Two of the problems I’m having are as follows:

1. The script isn't working consistently. This may be related to execution time. I'm considering changing it to a service, and then possibly doing some type of pause-and-loop to confirm the NIC has fully loaded. Some servers work like I expect, and some only work after I reboot them a second time.

2. Not all the services are being stopped and disabled. I can't understand why, since it works for almost all of them. Sometimes a service is disabled but still running, which is why I put a reboot in as the last action. Sometimes a service is stopped but not set to disabled, which means it will be running again after the reboot.

Testing this is a pain. Everything seems to work when I run it manually on a test machine. It even seems to work when I schedule it. The problem is that startup on the failed-over virtual server doesn't follow the same process it does on a test machine. To test the real scenario, I have to update the script, copy it to the live server, and then wait for the live server to back up and replicate offsite. That can take a decent amount of time.

Anyway, let me know if you see any major amateur mistakes or better ways to do something.


Last week, I was in Business Continuity training in Philadelphia with DRII.org. We were discussing RTOs (recovery time objectives), and it got me thinking about the backup/disaster recovery service we offer.

Our backup solution uses StorageCraft's ShadowProtect to take snapshots of the server on a regular basis, typically hourly. Once a day, a snapshot is sent offsite to bi-coastal datacenters. In the event of a disaster, our clients can have their servers virtualized in the cloud and access them via a VPN, or via Citrix if that client also has Citrix servers being backed up.

The reason this came to mind during training about business continuity, where we really weren't discussing technology at all, is that although our testing shows we can recover the servers in the cloud, we haven't had a consistent experience actually bringing those servers online. Several obstacles present themselves when doing failover testing; some of them only come up during testing and wouldn't matter in a live event.

First, we have management agents installed on each server for monitoring, maintenance, and support. These agents have unique IDs associated with them, so if we do a failover test that has internet access, both the live server and the test failover server report with the same unique ID, leading to a ton of false alarms and confusion. The second problem is that when the servers come up in the virtual environment, they have a new NIC. This NIC doesn't have the configuration it had in the live environment; it's assigned an IP by the network you configured during the failover, which only gives you options for the network, subnet mask, and gateway. That means none of the servers can find the domain controllers. You may quickly get a server booted up virtually, but try logging in and it could take quite some time. Then, after you log in, you need to reconfigure the network on the DCs, followed by all the other servers, and reboot.

So I'm sitting in the class thinking about how to fix this, so that when we do a test failover, it's quick and predictable. Here's what I'm working on to resolve it. Let me know if you see any flaws or have any advice on the best way to code this.

First, I'm going to have to create a Python script that runs as a service, so that it runs before anyone ever logs in. For it to run on the servers we fail over, it needs to run on the live servers as well, which is where I have to be careful. As for what the script will do, here's what I have figured out so far.

1. It checks the IP address to see whether it's in the range I'm setting up for the failover test network, a network that won't match any network at any of our clients. If the server is on that network, the script moves on to the rest of its work; if not, it exits. (A rough sketch of this check follows the list.)

2. The script goes through all the agent services that we don’t want running and disables them.

3. After disabling the services, it goes through those services and stops them.

4. The last configuration change is setting the IP address to a predetermined one. These settings will be planned out beforehand and saved to a configuration file on the live server, so when the server is virtualized from a recent backup, the configuration file will be there.

5. Lastly, the script reboots the server to make sure it refreshes communication with the domain controllers. When the service starts again after the reboot, it will have to check the configuration file to see if the IP has already been set; if so, it exits. (Just thought of this as I was typing this up.)
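For step 1, the check can be as simple as this (the failover range is a placeholder):

```python
import socket

FAILOVER_PREFIX = "10.99.0."  # dedicated test network, a placeholder

def on_failover_network():
    # Resolve this server's own address and compare it to the test range
    ip = socket.gethostbyname(socket.gethostname())
    return ip.startswith(FAILOVER_PREFIX)

if not on_failover_network():
    raise SystemExit  # live network: leave everything alone
```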

This should save us and clients a ton of setup time for failover testing and let us have a more predictable RTO in a live scenario.

So far I'm using the winreg module and the win32serviceutil module. I haven't gotten to the IP configuration part yet. Once I get this coded up, I'll put another post out with the code. If you have any input or recommendations now, let me know.
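The disable-and-stop part with those two modules looks roughly like this (service names are placeholders):

```python
import winreg  # _winreg on Python 2

import win32serviceutil

def disable_service(name):
    # A Start value of 4 in the service's registry key means "disabled"
    key_path = r"SYSTEM\CurrentControlSet\Services\%s" % name
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)

for svc in ["LTService", "LTSvcMon"]:
    disable_service(svc)
    win32serviceutil.StopService(svc)
```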


I wrote this script a little while ago, but I wanted to rewrite it so I could share it. Originally, I had our API key, email addresses, and SMTP server address hard-coded into the script. I obviously didn't want to share those, and I didn't want anyone to have to open the script to find and edit them. That led me to figure out how to save settings to XML, something I hadn't done yet as a beginner.

To give a little background on why I wrote this script, let me start by saying how lazy and forgetful I can be. We are a partner of Datto, whose appliances we use for our backup solutions. Without getting into too much detail, the backups are a managed service we provide, which means we have to monitor them for our clients and resolve any issues. To check the backups of all our clients, we log in to Datto's partner portal and drill down into each appliance to check the statuses. That runs into the two problems I already highlighted: one, I can be pretty lazy, so logging in and drilling down into each appliance is a pain in the butt; and two, I can be forgetful, so depending on myself to remember to check all these servers when I happen to walk into firefighting first thing in the morning is not the most reliable way to make sure backups get checked. This is where the script comes in, thanks to Datto's XML API.

Using this script, I can pull the backup statuses for all the servers and have them formatted in a nice email that I know I will read. I have this script running first thing in the morning and at lunch time, as our backups are hourly and I want to make sure there haven’t been any servers without backups for more than a few hours.

The way the script works is you run it with the -config option to generate the XML file it uses to store its configuration. It asks you for your API key, email subject, from address, to address, and SMTP server address. After the file is generated, you run the script without any options. It grabs the settings from that file, pulls the statuses from Datto, generates the email, and sends it to you in a tabular format similar to the following:

CLIENTA:
Server Name    Status   Last Snapshot
CLIENTA-SRV1   Success  2013-05-28 11:03:19

CLIENTB:
Server Name    Status   Last Snapshot
CLIENTB-SRV1   Success  2013-05-28 11:10:05
CLIENTB-SRV2   Success  2013-05-28 11:08:37
CLIENTB-SRV3   Success  2013-05-28 11:08:39
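For reference, generating the configuration file with ElementTree can be as simple as this (the element names are my own placeholders):

```python
import xml.etree.ElementTree as ET

def write_config(filename):
    # Prompt for each setting and store it as a child element
    root = ET.Element("config")
    for tag, prompt in [("apikey", "Datto API key: "),
                        ("subject", "Email subject: "),
                        ("fromaddr", "From address: "),
                        ("toaddr", "To address: "),
                        ("smtpserver", "SMTP server: ")]:
        ET.SubElement(root, tag).text = input(prompt)  # raw_input on Python 2
    ET.ElementTree(root).write(filename)

write_config("dattoreport.xml")  # placeholder file name
```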

Schedule this with cron on Linux or Task Scheduler on Windows, and you can save yourself the time of logging into Datto’s website and drilling down into each appliance.

Eventually, I plan on adding to this script to update a custom field in our RMM platform. The field will be something like last backup or maybe two fields, last successful backup and last backup status. Then if the time since the last successful backup gets too far out, I can have our RMM generate a support ticket. Then I won’t even have to look at the email. Lazy or efficient?

There are a couple of other things I may change or just try in future scripts. After writing this, I read about the optparse module, which seems like a much better way to handle command line options than the way I've been doing it. Also, I'm thinking about changing the configuration settings from XML to the pickle module, which seems much simpler. I would have played with those changes before posting this, but I'm getting ready to head off to Business Continuity training in Philadelphia and won't have time.

Oh yeah, I wasted a lot of time trying to figure out how to make this script run whether you're on Python 2.7 or Python 3. After putzing around, I found the only thing I needed to do was add the following lines, and it worked like a champ.
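The standard shim is a __future__ import for print plus a try/except around any modules that were renamed in Python 3, something like:

```python
from __future__ import print_function  # makes print() work on Python 2.7

try:
    from urllib.request import urlopen  # Python 3 location
except ImportError:
    from urllib2 import urlopen  # Python 2 fallback
```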

As always, if you have any recommendations, let me know. Here’s the code.

Download script



Below is a script I wrote to address a problem we were having at our clients. Most of our clients have their email hosted on AppRiver, on Exchange 2010. All of a sudden, users started reporting that Outlook wasn't connecting to the servers, or if it did connect, it wasn't long before it disconnected. We contacted AppRiver to see if they were having issues. It turns out Microsoft had put out a patch that was causing XP machines to have trouble connecting to the Exchange 2010 farm. The workaround was to put an entry in the hosts file for the front-end server.

Now that I had the fix, did I want to connect to every XP machine and edit the hosts file by hand? Hell no. To fix this quickly, I wrote a down-and-dirty version of this script with the host entry statically in the script: four lines that simply open the hosts file, write the line, and close the file. I then used our RMM, Labtech, to create a script that would run this on XP machines. Within seconds of it running on the computers, email was working again.
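Something like this, with a placeholder IP and hostname instead of AppRiver's actual front-end server:

```python
# Append the workaround entry to the hosts file
hostfile = open(r"C:\Windows\System32\drivers\etc\hosts", "a")
hostfile.write("\n203.0.113.10    mail-frontend.example.com")
hostfile.close()
```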

Obviously, it would be bad if this script ran repeatedly on the same computer, since it would add duplicate host entries. Now that the fire was out, I wanted to write a version I could reuse in the future by just passing an IP address and hostname to the script. Unlike my quick fix, the reusable script needed some checks built in: it has to make sure the host isn't already in the hosts file, and it has to make sure the IP address and the hostname are valid.

Here's what I have so far. I'd like to add the ability to delete and update entries as well. To run this on Linux, you must use sudo, su, etc.; on Windows, you'll want to run it as administrator. Luckily, we're able to do that via our RMM platform.
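In outline, with the validation and duplicate checks described above (paths and validation rules simplified):

```python
import re
import socket
import sys

# Pick the hosts file location based on the platform
HOSTS = (r"C:\Windows\System32\drivers\etc\hosts"
         if sys.platform == "win32" else "/etc/hosts")

def add_host(ip, hostname):
    # Validate the IP address
    try:
        socket.inet_aton(ip)
    except socket.error:
        sys.exit("Invalid IP address: %s" % ip)
    # Validate the hostname with a simple character check
    if not re.match(r"^[A-Za-z0-9.-]+$", hostname):
        sys.exit("Invalid hostname: %s" % hostname)
    with open(HOSTS, "r+") as f:
        # Bail out if the host is already present to avoid duplicates
        if any(hostname in line.split() for line in f):
            sys.exit("%s is already in the hosts file" % hostname)
        f.write("\n%s    %s" % (ip, hostname))

if __name__ == "__main__":
    add_host(sys.argv[1], sys.argv[2])
```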

Let me know your thoughts. I’m sure there are many ways to improve this, and I’m sure there are other ways to do it.