Summer K. Rankin


YMCA web scraping: what the Village People have to do with obesity (Part 1)

GitHub repo of the code.

I have been doing some thinking about the factors that contribute to obesity in the US (see the CDC's data).

One of the issues for people who struggle with obesity in low-income areas is access to a safe place to exercise. For example, as someone who lives in Baltimore, I can certainly see why you wouldn't just go running if you don't know the area. It can make you an easy target for someone who is looking to get your phone/cash or just make trouble. I was thinking about the least expensive options for people who want to go to a gym, pool, or basketball court, and I immediately thought of the good old YMCA! My family has always been members of the Y. It's where I learned to swim, where I did step aerobics in the 80s, and where I ran on the track with my Dad when it was cold outside. It also happens to be the most cost-effective option around town. The Y was always a good place to build community and get some exercise. I even went to a lock-in on New Year's with my friends. Much love to the Pat Jones YMCA.

I decided to look at the data on obesity in the US and whether or not it has a relationship with the locations of YMCAs; i.e., are there more YMCAs in areas with lower obesity? In order to do this I needed the locations of all YMCAs, but the only way to get them was to enter all the states, cities, or zip codes into this page (**as of Oct. 5th, 2017**):

Find Your Y (the YMCA locator page, with its list of locations)

When you enter a state, it only shows the 20 closest locations, and I wanted them all, so the most systematic way to get every location was to enter the zip codes. Thanks to Selenium, this can be automated once you obtain a list of zip codes. I used Selenium with Python 3 and wrote a little function that takes a list of zip codes (I separated them into states) and automates the process of entering each zip code, clicking 'go', and pulling all of the HTML from the next page. That page (upper right) consists of a series of tables that list the name, address, city, state, zip, phone, etc. The code for it is below.


First, import a bunch of libraries.

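Something along these lines covers what we'll need (a sketch of my setup; exact library versions from 2017 may differ):

```python
import pickle          # for saving/loading the scraped DataFrames
import time            # to pause while pages load

import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
```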

Then, we make a little function to unpickle files (pickle is the format I used for storing pandas DataFrames).

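A minimal version of that helper, assuming the DataFrames were saved with Python's pickle module:

```python
def unpickle(path):
    """Load a pickled object (e.g., a pandas DataFrame) from disk."""
    with open(path, 'rb') as f:
        return pickle.load(f)
```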

Now, we will make a very long function, but it will be worth it. Don't worry, I'll explain it in pieces. 

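Here is the skeleton we'll fill in piece by piece (the function name and argument are placeholders of mine):

```python
def scrape_ymca_locations(zipcodes):
    """Search the YMCA locator for each zip code and collect the listings."""
    records = []   # one dict per YMCA location
    seen = set()   # names we've already stored, so duplicates get skipped
```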

Iterate through a loop over the zip codes.

Open Chrome remotely with chromedriver; be sure you have it installed. The process is simple.

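Inside the function, the loop and the browser start like this (a sketch; it assumes chromedriver is on your PATH):

```python
    # (continuing inside scrape_ymca_locations)
    for zipcode in zipcodes:
        # launch a fresh Chrome session controlled by chromedriver
        driver = webdriver.Chrome()
```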

Now, we enter the URL of the page we want to go to.

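Something like the call below; the locator lived on ymca.net at the time, but treat the exact URL as an assumption and double-check it:

```python
        # load the 'Find Your Y' locator page
        driver.get('http://www.ymca.net/find-your-y/')
```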

When you look at the source code for this website, you can find the element ID of the box where we need to enter our zip code. That ID is the word you pass in; for us, it was 'address'.

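In sketch form (calling .submit() on the field stands in for clicking 'go'; you could also locate the button itself):

```python
        # find the search box by its element ID and type in the zip code
        search_box = driver.find_element_by_id('address')
        search_box.clear()
        search_box.send_keys(str(zipcode))
        search_box.submit()   # stands in for clicking 'go'
        time.sleep(2)         # give the results page a moment to load
```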

The HTML was parsed using BeautifulSoup4 in Python 3. Create a variable called soup2 (soup1 was lost in a terrible accident; don't ask) and fill it with all the HTML from our current page (the one that lists our 20 locations).

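Roughly:

```python
        # hand everything Selenium sees on the results page to BeautifulSoup
        soup2 = BeautifulSoup(driver.page_source, 'html.parser')
```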

HTML from the 'Find Your Y' page

I figured out that each location had the same unique style tag, so that's what I searched for in order to pull the name, address, city, state, and zip. This gets entered into our find_all call.

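Something like this; the tag name and style string here are stand-ins, so copy the real ones from the page source:

```python
        # every location block carried the same inline style attribute,
        # so matching on it pulls out exactly those blocks
        items = soup2.find_all('p', attrs={'style': 'margin-bottom: 0.5em;'})
```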

This takes each item and gets the text associated with the first 'a' tag, which is a web link; the text of that link is the name of the YMCA, and that is what we save in the name variable. See all those 'br' tags? The item.next_sibling.next_sibling method proved to be my best friend!

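A sketch of that step; it assumes each block looks like &lt;a&gt;name&lt;/a&gt;&lt;br&gt;street&lt;br&gt;city, state zip:

```python
        for item in items:
            a = item.find('a')
            name = a.get_text(strip=True)   # the link text is the YMCA's name
            # each <br> sits between the text nodes, so hopping two siblings
            # skips past the <br> and lands on the next line of the address
            address = a.next_sibling.next_sibling        # street address
            nn = address.next_sibling.next_sibling       # 'city, state zip' line
```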

Now, we make another loop that checks whether the location has been stored already; if not, it stores the variables and separates that big 'nn' variable into city/state/zip.

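Roughly like so (assuming nn looks like 'Baltimore, MD 21201'):

```python
            if name not in seen:
                seen.add(name)
                city, state_zip = nn.strip().split(',')   # 'Baltimore', ' MD 21201'
                state, zip_code = state_zip.split()       # 'MD', '21201'
                records.append({'name': name,
                                'address': str(address).strip(),
                                'city': city.strip(),
                                'state': state,
                                'zip': zip_code})
```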

Parsing HTML with BeautifulSoup4

The last thing to do is close the web driver!!! Don't forget this step. Then drop the null rows and add a location column.

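Wrapping up each iteration and the function as a whole (the combined location column is my reading of what gets added at the end):

```python
        driver.quit()   # close the browser before moving to the next zip code

    df = pd.DataFrame(records)
    df = df.dropna()                                   # drop the null rows
    df['location'] = df['city'] + ', ' + df['state']   # combined location column
    return df
```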

I scraped one state at a time, and some states have a LOT more zip codes than others. My average speed was about 3 zip codes per minute, which meant a few nights of letting my computer stay on and work all night while I snored.

Please see my other posts about initial data cleaning and analysis.