Telescope and JSON

Ruby Bautista
3 min read · Nov 8, 2020

This week, we were introduced to the project our predecessors in the course established: Telescope. Telescope aggregates the blogs of all DPS909 students into a single site. This is particularly helpful because many students choose different platforms for their blogs, and some of those platforms, such as Medium, limit how many articles you can access in a day. Telescope overcomes this hurdle and makes it much more convenient to read my fellow students’ blogs. We are, after all, quite used to mindlessly scrolling through social media, and scrolling through Telescope is a bit more enriching than a typical scroll through Instagram and the like (at least my Instagram feed, which mostly consists of food, memes, and the shenanigans of a particular few).

The lab for this week was to set up the Telescope project on our own machines and use the API running at http://localhost:3000/posts to retrieve the JSON for the ten most recent posts on Telescope. To accomplish this, we had to install all the project dependencies before running npm start on the project. Setting up a project has the potential to be a rather onerous task, something compounded by a lack of documentation. Luckily, the project has a CONTRIBUTING.md, which made things a little easier. By following its environment setup instructions, I was able to set up the project successfully. This was immensely helpful, as I hadn’t heard of some of these technologies before. Admittedly, there was some confusion, and I wasn’t entirely sure I was doing everything properly. However, after a couple of restarts and another read through the documentation for those technologies, I was able to get the API running on port 3000.
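For the curious, the API call itself is very simple. Here is a minimal sketch in Python (the language linkReaper is written in); I'm assuming the endpoint just returns a JSON array of posts, so the exact shape may differ from Telescope's real response:

```python
import requests

# Rough sketch of the lab's API call, assuming the endpoint simply
# returns a JSON array of posts (the exact response shape may differ).
response = requests.get("http://localhost:3000/posts")
response.raise_for_status()

posts = response.json()
print(f"Received {len(posts)} posts")
```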

Next, we had to modify our link checker projects to go through each of the blog posts returned by the API and search them for dead links. I had a bit of experience working with APIs from my web class, so I had some idea of what the flow of the program would look like. Since my project already had a command (readwebsite) that reads a website and looks through the returned HTML, it was mostly a matter of figuring out how to read the JSON returned by the API and repurposing the readwebsite code to read through the blog posts.
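In rough strokes, the new flow looks something like the sketch below. The check_link helper and the "html" field name are placeholders I made up for illustration, not linkReaper's actual code:

```python
import re
import requests

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def check_link(url):
    # Hypothetical stand-in for linkReaper's existing check: treat a link
    # as alive if the request succeeds with a non-error status code.
    try:
        return requests.head(url, timeout=10, allow_redirects=True).status_code < 400
    except requests.RequestException:
        return False

posts = requests.get("http://localhost:3000/posts").json()
for post in posts:
    html = post.get("html", "")  # "html" is an assumed field name for the post body
    for url in URL_PATTERN.findall(html):
        status = "ok" if check_link(url) else "dead"
        print(f"{status}: {url}")
```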

While I was testing the code, I ran into a bit of a speed bump. With my fellow students writing about using http://localhost:3000/posts, that URL was being picked up by linkReaper and tested. This had to do with the way I was searching through the files: with regex. I looked for a way to avoid picking up those links and came across BeautifulSoup, a Python library for “pulling data out of HTML and XML files”. It did indeed pick out the links in the HTML files. However, there was another issue: some of the links in the href attributes weren’t complete; for example, they were missing the protocol. It will take a bit of redesigning to figure out how to properly handle the incomplete URLs. I’ve created an issue so that this can be addressed in the future. In the meantime, the regex stays.
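As a toy example of why BeautifulSoup avoids the speed bump: it only collects URLs that appear in href attributes, so a URL merely mentioned in a post's text gets ignored. This is just my own illustration, not the code from linkReaper:

```python
from bs4 import BeautifulSoup

html = (
    "<p>Read about <a href='https://example.com'>the lab</a> "
    "at http://localhost:3000/posts</p>"
)

soup = BeautifulSoup(html, "html.parser")
# Only anchors with an href attribute are collected, so a URL that merely
# appears in the post's text (like the localhost one above) is skipped.
links = [a["href"] for a in soup.find_all("a", href=True)]
print(links)  # ['https://example.com']
```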

It was nice to be working in Python again; I’ve missed working on linkReaper. It was also interesting to work with the Telescope project, the biggest web project I’ve touched. It was fun to see some familiar usernames and profile pics from the DPS909 Slack sprinkled around the issues in the project. Since Release 0.3 has us all working on Telescope, I’ll soon be joining them.
