The very first language I learned was Python, and I was planning to stick with it for as long as I possibly could. Python was the perfect first language for a beginner embarking into this field.
However, if I want to take full advantage of the opportunities around me, I must learn C#. Therefore, this weekend marks my very first major pivot in my learning journey: learning a new programming language!
This new chapter is making me feel all sorts of things. First, it’s making me feel like I’m growing as a programmer because I’m no longer going to be limited to one language. While I have a personal appreciation for Python, since it’s the point where I began, I haven’t been able to find many other people or businesses around me who work heavily with Python. A lot of what I have seen has involved other languages, C# for example.
I’ve only just begun an introduction course on C# and already I have so many questions. What helps is that, unlike when I was learning Python, for C# I have something to compare the language to, and real, in-person friends to discuss C# with. It’s an added bonus that my husband needs to improve and learn more C# for his job as well, so we will be learning together.
using System;

namespace HelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
As the tradition continues, I executed my first Hello World program in C#. The initial “install everything and get it running” part of learning a new language is always the first hump to get past. I remember installing and setting up Python for the first time very clearly, and the fear that came with needing to redo it on a whole bunch of other systems afterwards.
My very first impression of C# is that it’s visually quite different from Python. The block of code needed to run that Hello World program is much larger than its Python equivalent. I find that interesting, and I hope to learn why C# has such an elongated visual structure. Part of me wonders if this helps programmers find blocks of code in larger files, because the curly braces signal to the eye where a block concludes, whereas Python relies on a more linear structure of indentation and spacing.
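For contrast, the entire equivalent program in Python is a single line:

print("Hello World!")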
I also learned a helpful nugget from the Programming Throwdown podcast that Python and C-based languages work very well together. That makes me feel very comfortable and optimistic about learning C# as my next programming language!
THE DEFINITIVE GUIDE TO PROGRAMMING PROFESSIONALLY
Reading is very important to me because it’s one of the best ways I’m able to learn and retain information. I’ve been approaching programming and Python on a very granular level, staying very up-close to Python and a couple of other specific topics, but I haven’t had a book that zoomed out for a good overview.
Until… I found this book! Althoff’s The Self-Taught Programmer is a fantastic overview by someone who has already accomplished exactly what I desire to accomplish.
Althoff speaks about something called “The Self-Taught Advantage.” He writes, “You are not reading this book because a teacher assigned it to you, you are reading it because you have a desire to learn, and wanting to learn is the biggest advantage you can have.”
He goes on to note how founders of major companies such as Apple, Twitter, and Instagram were self-taught programmers.
Finding and reading this book was a great re-motivator to continue with the self-learning route. It’s easy to fall into moments of doubt, but I find that channeling those moments into proactive steps is a helpful strategy.
Moreover, Althoff supplies the reader with a quote from Doug Linder: “A good programmer is someone who always looks both ways before crossing a one-way street.” I laughed at this because I’ve been doing that since I was a child! What this quote highlights is that in programming, you can’t have any blind spots. Even if something should work a certain way, you still need to test and analyze it from multiple perspectives. It can also mean that you should know how to both build something and take it apart. The person who looks both ways before crossing a one-way street is keen enough to realize that there is still a small chance another driver missed the sign and went the wrong way down it. Therefore, that person takes the extra precaution (which in programming would be akin to catching an exception) to improve their odds of safely crossing the street.
What makes this field a little difficult to get into is that technology is very intertwined. It’s not enough to just know one programming language; there are a lot of other skills you will need to acquire. If you want to know what I mean, search your browser for “software engineer” or “data scientist” job descriptions. In the “preferred skills” section, you’ll often see a long list of acronyms and names of things you should know. It can feel a little foreign if you don’t know how to parse the terminology.
What this book provides that others don’t is a brief introduction to the major areas needed to get a job in programming, and a blueprint for how to accomplish that. This book won’t teach you all the specifics of Python or programming, but it will point you in the right direction of what you should be studying.
(Yes, that’s a gif I made of my notes on Python. I’m taking a three-hour overview course on Python on Udemy that serves as a good refresher on the basics I’ve already reviewed in depth.)
I’ll be taking Althoff’s advice: program every day. That’s the new challenge and goal. I definitely recommend this book to any fellow dedicated self-learners!
When I first learned about GitHub, I thought it was just a website. It was very confusing and intimidating. However, I was told by peers the importance of GitHub for programming, and especially because I had found myself at a dead end during an early programming project.
I have a (somewhat embarrassing) first project up on GitHub (which is a perfect example of code that needs to be cleaned up–a fun idea for a future project). However, that first project from years ago was a great introduction to GitHub.
First, Git and GitHub are not the same thing. When I looked up “what does Git stand for?” I stumbled upon an interesting story. Git is not actually an acronym; the name was partly chosen because it was a short word unlikely to clash with any existing command and mess up someone’s code. The story goes that Torvalds created the software and named it after the British slang word “git,” which roughly translates to “a rotten person.”
GitHub is a website, a service, where people can collaborate and track the history of code. It’s a wonderful space for people to learn and grow. However, it is possible to be proficient in using just the GitHub website without actually working with Git through the terminal, and vice versa.
The goal is to balance usage of both Git and GitHub, through the terminal and the website. Prior to this post and day of studying, I had mostly just worked on the GitHub website and not in the terminal. Having gone through multiple videos so far from the “Git and GitHub for Poets” video series by The Coding Train on YouTube, I’m now more familiar with working with Git and GitHub in the terminal. It’s been an interesting ride, and this is just the beginning!
Here are some notes I took while watching the above tutorials. Hope you find them as helpful as I do!
Git: version control software
GitHub: web service where people can collaborate on open source code and track project history
Repository (or, “repo”): a project; a repository of files. Repository names can’t have spaces so they will include a dash as in: repository-title-example
Commit: a change to a file that is saved in the repository
Commit Hash: a unique identifier for each particular commit
Master Branch: the main, default branch of a repository
Branch: a copy of files for experimentation before a commit is merged to the master branch
Pull Request: a way to ask the collaborators of a repository to approve your changes and merge them into the master branch
Merge: the action of a pull request successfully combining changes into the master branch of a repository
Fork: a copy of a repository under your own account, for experimentation without affecting the original version
Issues: a place to leave a comment about a problem, or to ask a question. Raising an issue means to file a potential bug
Fixes: if you include “fixes” followed by the issue number (for example, “fixes #12”) in a commit message or pull request description, the referenced issue will be closed automatically once the change is merged into the default branch
Remote: a copy of the repository that lives on a server. When you run “git push,” you need to tell Git where to push, as in “git push <remote> <branch>”
*Tip on Commit Hash Codes: place the commit hash code in a comment to help link to the issue
After some trial and error, I was successfully able to push a new file into my GitHub repository through the terminal. It was very cool to see it work! I ran into some trouble getting into the proper directory on my desktop for the repository, and getting the remote to work, but finally, it worked!
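For future reference, here is roughly the sequence of commands that worked for me (the folder path and the file name hello.txt are just stand-ins; origin and master were the remote and branch names in my case):

cd ~/Desktop/repository-title-example   # move into the local copy of the repo
git remote -v                           # check which remote the repo will push to
git add hello.txt                       # stage the new file
git commit -m "Add hello.txt"           # record the change with a message
git push origin master                  # send the commit to the master branch on GitHub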
I found trying to change my login information through the terminal to be a little convoluted, and I was worried that an incorrect password was the cause of my push updates not going through. However, seeing my text file successfully added on GitHub through the terminal confirms that the password and login are correct.
I feel more confident now to look around on GitHub for open source projects and contribute to them, which is the goal!
For anyone wishing to learn more, the video series I have found to be incredibly helpful is available for free on YouTube, and it is called “Git and GitHub for Poets” by The Coding Train. I very much appreciate the depth and visual interface used in these videos; they’re very user-friendly and great for beginners!
Before I get into the Hacktoberfest fun, I wanted to update you on my progress since the last post.
I was in New York visiting family these past few months (I moved from NYC to Florida last year), so it’s been a little hectic! Luckily my brain needed the time to process everything I had learned from the previous machine learning course before diving into a new one.
Since then, I’ve begun another course on Udemy about Machine Learning and Data Science in Python and R, which is dense but very fascinating! I’ll definitely need to add math for Python, and matrix math, to my list of topics to study.
I’ve also connected with an awesome community, the Women Who Code organization, which is how I learned about Hacktoberfest, through a conversation on the Slack channel!
Hacktoberfest Participation
For Hacktoberfest, everyone gets together and participates in supporting open source projects by submitting contributions. Open source is another reason why I love this field, because the teamwork is inspiring. In order to participate, you need to use GitHub to submit pull requests. Prior to this event, I had some light experience with GitHub but hadn’t gone into too much depth yet. After this event, I now understand how incredibly crucial it is for anyone interested in development to become proficient with it. It’s a skill most employers will expect you to have, as a bare minimum. Moreover, GitHub is where a lot of the magic happens.
In order to be considered a full participant of Hacktoberfest and receive a super cool shirt (dreams!), you need to submit at least four pull requests that adhere to the Hacktoberfest guidelines and rules.
I submitted five pull requests. I felt so happy to see the bar completed! But my requests had a little clock icon near them, meaning that their eligibility was pending. To this day, a few months later, they’re still pending. Additionally, two of my submissions were unfortunately made to ineligible repositories; since those repositories were ineligible, so were my pull requests on them.
If any of these words confuse you, don’t worry, they confused me too at first. But as a beginner, I loved the freedom of Hacktoberfest. It was a time where everyone was getting together and people knew that beginners would be involved. This gave me more confidence to actually participate by submitting pull requests with less fear.
I’m so incredibly happy to have participated in Hacktoberfest this year, and am excited to see my performance for next year’s event after much more learning. I’m now off to learn, more formally, about GitHub, in preparation!
I’ve been fascinated by chatbots ever since the AIM chatbot “Smarter Child” hit the internet back in the technological stone age. Since then, I’ve thought a lot about the types of chatbots that I would like to create in the future, and I’ve also done my fair share of “personal research” using the bots available on the iPhone App store, like Replika (which is pretty impressive, but I’ll save that for another post).
During my extremely novice Python days, I tried making a simple chatbot the ancient way, using a vast number of “if/then” statements and trying to predict every possible input from the user in order to navigate the conversation. I quickly learned after reading Hello World by Hannah Fry that the method I was using was completely archaic and that technology has improved well beyond it. Today, we can actually train a program to learn a task; in this case, that means training the chatbot to learn how to speak.
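For the curious, here’s a toy sketch of that archaic approach (not my actual old project, just the general shape of it), where every possible input has to be anticipated by hand:

# A hard-coded chatbot: every response branch is written out manually.
user_input = input("You: ").lower().strip()

if user_input in ("hi", "hello", "hey"):
    print("Bot: Hello there!")
elif user_input == "how are you?":
    print("Bot: I'm just a pile of if statements, but thanks for asking.")
elif user_input == "bye":
    print("Bot: Goodbye!")
else:
    print("Bot: I don't understand that yet.")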
Introduction to Deep Learning, Natural Language Processing (NLP) and TensorFlow
If you’ve read my old posts (thank you!!) you may have noticed that I’ve referenced these topics before as areas of interest. Imagine meeting your favorite celebrity: the excitement and shock! Well, that’s me with learning these topics. They are the Star to my Struck.
There’s a 12-hour, 95-lecture course on Udemy that I’ve been taking, called “Deep Learning and NLP A-Z: How to create a ChatBot.” I spent this past weekend getting through around 9 hours of the course, and I personally loved taking it.
Course Overview
Deep NLP Intuition lecture (concepts)
Data Pre-Processing
Building the SEQ2SEQ Model
Training the SEQ2SEQ Model
Testing the SEQ2SEQ Model
I also learned about / used:
Anaconda and Spyder
TensorFlow
Creating a virtual environment with Anaconda to utilize TensorFlow inside of the environment (rough command sketch below)
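Since this part was brand new to me, here’s a minimal sketch of the kind of commands involved (the environment name and Python version are my own placeholders, not necessarily the course’s exact ones):

conda create -n chatbot python=3.5   # create an isolated environment
conda activate chatbot               # switch into it (older conda versions use: source activate chatbot)
pip install tensorflow               # install TensorFlow inside the environment only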
Although the course was 12 hours long, it does not take just twelve hours to truly learn all of the material covered. I will be doing a lot of additional reading, re-watching, and re-absorbing of all of the information that was introduced. Not only was this my first introduction to the above topics, but it was actually my first introduction to seeing a real, complex program be built, from beginning to end, narrated by an experienced developer. I learned so much despite having questions, and I’m excited to go back and have those questions answered.
Anaconda and Spyder
I had heard of Anaconda from other videos and research, but this was the first time I had actually used Anaconda and the Spyder IDE. I’m extremely happy to have used it because I very much enjoyed the layout, seen below.
What’s going on there? Well, you write your code on the left, you see your variables on the upper right, and then you have your console to run and test on the bottom right.
As a visual learner, I found it extremely elucidating to see the “variable explorer,” where you can click into each variable, such as lists and dictionaries, and watch the functions work in real time. Moreover, this layout and course really helped me better understand the concepts of dimensionality and matrices within programming.
Data Science & Analysis, I’ve got my eye on you!
The instructor on Udemy for this course (Mister Kirill Eremenko, who did an awesome job of teaching!) was making some jokes about getting through the “Data Preprocessing” part because it’s not exactly the most fun. But as a former data-entry manager, I had a wonderful time with the data preprocessing part!
I was so amazed to watch the data preprocessing steps clean up our data and variables. Using the layout in this way really helped me understand the dimensionality of programming better. Moreover, it was a really good visual that helped me further understand how and why Python is an object-oriented language.
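For example (my own tiny illustration, not from the course), even the ordinary data structures we were inspecting in the variable explorer are objects with their own types and methods:

words = ["hello", "world"]
print(type(words))            # <class 'list'>
print(words.index("world"))   # 1

counts = {"hello": 3}
print(type(counts))           # <class 'dict'>
print(counts.get("hello"))    # 3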
One unexpected take-away from this course was the idea to gear some future learning toward the requirements of a data analyst and data scientist. Of course, Python development is the ultimate overarching goal, but with my experience and personality type, I think I may be a good fit for data analysis.
Deep NLP with SEQ2SEQ model in TensorFlow
As a creative writer, I was amazed to learn how to turn words and sentences into vectors and integers. We had to add tokens, make inverse dictionaries, cross-reference lists and dictionaries (and even make dictionaries inside of lists!), and more. More experienced coders may be laughing at my paraphrasing of all of this, and I’m sure future-me will laugh as well.
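To give a flavor of what that looks like, here’s a tiny, simplified sketch of the kind of word-to-integer mapping we built (not the course’s exact code; the special token names are the usual seq2seq ones):

# Build a dictionary that maps every unique word to an integer id.
sentences = ["follow the white rabbit", "the rabbit is late"]

word2int = {}
for sentence in sentences:
    for word in sentence.split():
        if word not in word2int:
            word2int[word] = len(word2int)

# Add the special tokens the model needs (padding, end/start of sentence, unknown words).
for token in ("<PAD>", "<EOS>", "<OUT>", "<SOS>"):
    word2int[token] = len(word2int)

# The inverse dictionary translates the model's integer output back into words.
int2word = {integer: word for word, integer in word2int.items()}

# A sentence becomes a list of integers, and back again.
encoded = [word2int[word] for word in "the white rabbit is late".split()]
print(encoded)
print([int2word[i] for i in encoded])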
Finally, after all of the data preprocessing, we were able to build our SEQ2SEQ model, which utilizes deep learning and recurrent neural networks (RNNs). We used LSTMs, which build up the context of words and sentences, inside an encoder and a decoder. Probabilities are used during the training of the SEQ2SEQ model, which is how deep learning teaches the program to understand and respond to the language it’s given.
Training the SEQ2SEQ Model
The “brain” of our chatbot was created (how exciting!) and it began its training last night on my laptop. Experienced programmers may have read that and had a moment of “what?” at the word laptop. Yeah, I learned that the hard way. After hours of watching the training go through each batch, I realized that my laptop was moving very slowly for this task. In the Udemy course, it was recommended that the SEQ2SEQ model training be done on a more powerful computer.
Finally, this morning I re-did all of the beginning steps, like installing Anaconda on my desktop and creating the virtual environment. Then I re-preprocessed my data (somuchfun), began a new SEQ2SEQ training, and voilà, here we are!
As I was writing this blog post and keeping an eye on the training going on in my console, I saw the familiar print statement appear on the screen that validates that this chatbot is, indeed, LEARNING!
Validation! “I speak better now!!”
Now, we have full evidence that the chatbot is properly utilizing and learning through this RNN because we have…weights!! Those files up top with “weights” are not files I have created, but files that the program, our chatbot, has created through learning over the past few hours. Once the training is complete and we reach batch 4120 (currently on 2500), I will then resume the last and final part of the Udemy course to fully complete this chatbot.
In that last part, we will utilize the deep learning that the program has accomplished and finally interact with our chatbot!
Final Reflections on Part 1
This project has opened my eyes to the true complexities and capabilities of Python and computer programming in general. Although I chose a very difficult project / chatbot to start off with, I’m happy to have done so because it was also the most efficient way to create a chatbot.
I’d rather learn the most efficient version over easier ones, because the most efficient options are the ones used in real-world applications.
From this course and project, I’ve learned that going forward, my next points of study will include:
Data Science and Data Analysis with Python
Math for computer programming and Python
Jupyter Notebook
Web Scraping
Machine Learning, Deep Learning, and NLP
R (programming language)
Overall, this was an amazingly beneficial project to delve into! Definitely a pivotal moment in my learning journey because I now have a better mental reference for the future. I’ll be back later for Part 2 for the exciting completion of this project, where we get to finally speak to our bot!
Update – 48hrs Into Training
While watching the program learn, I realized some errors I made above when referencing the amount of time remaining.
I was referring to the batches as an indicator, but that wasn’t accurate. I’ve realized that what matters is the Epoch count, and we are only up to Epoch 5/100. One run-through of all 4,000+ batches equals 1 Epoch. My computer is going through roughly 3-4 Epochs per day, which should hopefully increase in speed as the learning improves… (I think).
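Here’s the back-of-the-envelope math I’m using now to estimate the remaining time (the epochs-per-day figure is just my rough observation, so treat this as a guess):

total_epochs = 100
current_epoch = 5
epochs_per_day = 3.5   # rough average I'm seeing on my desktop

remaining_days = (total_epochs - current_epoch) / epochs_per_day
print(f"About {remaining_days:.0f} more days at this pace")   # roughly 27 days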
I’m going to let this run until tomorrow to see how far we get, test it out, then let it continue to learn and test it out again afterwards.
I gotta say, feeling like a scientist is really fun. It’s been really cool to keep an eye on this and observe!
I’ve been very interested in the topic of machine learning, so I did some research and found an awesome project that I could do with my Vector Robot (…who I may refer to as El Robo sometimes).
Set up your Google Vision account. Then follow the Quickstart to test the API.
Clone this project to local. It requires Python 3.6+.
Don’t forget to set Google Vision environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. e.g. export GOOGLE_APPLICATION_CREDENTIALS="/Workspace/Vector-vision-62d48ad8da6e.json"
Make sure your computer and Vector are on the same WiFi network. Then run python3 object_detection.py.
Step one was easy-peasy, since I had already tested and run the SDK a multitude of times (woo-hoo!). But oh boy, guess what Step 2 truly looked like once I followed the Quickstart link for the API setup:
From the Service account list, select New service account.
In the Service account name field, enter a name.
From the Role list, select Project > Owner. Note: The Role field authorizes your service account to access resources. You can view and change this field later by using the GCP Console. If you are developing a production app, specify more granular permissions than Project > Owner. For more information, see granting roles to service accounts.
Click Create. A JSON file that contains your key downloads to your computer.
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. This variable only applies to your current shell session, so if you open a new session, set the variable again.
Yeah, so things got complicated real fast. Instruction Set A: Pt. 2 is where I officially fell down the rabbit hole of Google Cloud.
For ease of reference, I’ve labeled the instruction sets A and B, but let’s be clear: all of B is actually just a sub-set of A. Instruction Set B is just for setting up a Google Cloud Platform account and installation, which was not something I was anticipating learning on this journey, but I am so happy I stumbled upon it!
Google Cloud is offering one year free for new users, which is a crazy value considering the retail price allotted is $900+ for the year!
Once I set up my Google Cloud Platform account and logged into the console, I felt a unique sense of adrenaline, almost with a tint of rebellion.
I read all of the tabs and sub-tabs: Cloud Build, Big Data, and last but not least… Artificial Intelligence. I don’t even have the right words to capture the feelings I experienced while scrolling up and down the console’s navigation menu, but I definitely had a huge smile on my face. Although I didn’t (and still don’t) know exactly what Google Cloud can do or how to use it fully, what was clear right away, even as a complete novice, was that this tool is powerful and, if learned properly, could be used for epic projects (and help with future career prospects, *wink wink*).
I’m a very visual learner, so seeing how the navigation menu is categorized gave me vital information for my coding journey, because it can be used like a road map. Now I know for sure that two areas of interest I have going forward involve Big Data and Artificial Intelligence.
As I continued to read Google’s instructions, a little voice was starting to whisper that perhaps this project was over my head. Especially once I got down to Instruction Set B: Step 5
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. This variable only applies to your current shell session, so if you open a new session, set the variable again.
Which corresponded to Instruction Set A: Step 4
Don’t forget to set Google Vision environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. e.g. export GOOGLE_APPLICATION_CREDENTIALS="/Workspace/Vector-vision-62d48ad8da6e.json"
And these instructions left me like this:
But no matter how much I re-read the instructions, I just couldn’t understand what they were telling me, which actually made me feel as if I didn’t even know how to read! After some trial and error, I looked more like this:
Finally, I decided to cave and ask for help. Luckily, my husband and his coworkers were over. I asked them for help but quickly came to learn that Google Cloud is a pretty niche subject to just casually ask about.
“Damnit, Jason!”
I was stuck on the .json file part, for which I kept yelling “Damnit, Jason!” in the house, confusing everyone. (“Jason” has now become an inside joke in the house, and from now on, whenever I work with .json files again, this first memory will be in my mind).
The next day while cooking dinner, I decided to watch a YouTube instructional video on how to setup Google Vision. As the tutorial leader was going through the steps, I was happy to see that some of the steps were very familiar, which meant I had definitely learned something from the previous day. And then finally, I got my answers, and the instructions were demystified. I was so, so happy to finally understand what the heck the above steps meant and looked like.
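In case it helps anyone else as confused as I was: the step boils down to handing the Google client libraries the path to that .json key file. The export command in the instructions sets it for your shell; the Python equivalent below (using the example path from the project’s instructions) sets it only for the running program:

import os

# Tell the Google client libraries where the service-account key lives.
# This affects only the current Python process; putting the `export` line
# in .bash_profile makes it stick across terminal sessions instead.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Workspace/Vector-vision-62d48ad8da6e.json"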
What I love most about coding and these projects is that I end up learning about so much more than what I initially intend.
Through this process I learned about:
Google Cloud Platform
How to install and configure Google Cloud Vision API
Virtual Environments and Environment Variables
Changing the .bash_profile in the terminal (including how to inadvertently fuck up your .bash_profile, and then fix it again).
More about Python pip installations and configurations
How to move through different directories in the terminal
Finally, I had made it past every single step, and it was time to test out the Python program to see if my Vector Robot would begin detecting objects.
Successfully setting up the Google Cloud Vision API was such a pivotal milestone for me in this learning journey.
Mind you, that means I had only just gotten past one subset of the main instructions. My intended purpose in setting all of this up was ultimately to run one program, created by one single person, and see if it worked.
Drumroll…anticipation…andddddd:
It did not work.
TypeError: __init__() got an unexpected keyword argument 'enable_camera_feed'
I looked over all of my steps. Everything was in order, but I kept getting a TypeError from the Python file written by the creator. After countless web searches and troubleshooting, I was at a dead end. I had only one option left: I had to contact the person who made the program.
I went to GitHub and posted my first “issue” on the Python file. This was the first time I had ever posted an issue, so I was very unsure of the etiquette. The creator got back to me and said that he had created and tested this with the old Anki Vector SDK package, not the most recently updated one.
This may seem like a very sad ending to the story, since the project was not successfully executed on my end, but I was so happy with that response from the creator! The reason being that the creator didn’t say my error was due to something wrong in my steps, but rather to something outside of my control.
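Now the error makes more sense to me. Here’s a toy illustration (not the real Vector SDK, just my own simplified example) of how removing a keyword argument in a newer version of a package produces exactly this kind of TypeError:

# The "old" class accepts the keyword argument; the "new" one no longer does.
class OldRobot:
    def __init__(self, serial, enable_camera_feed=False):
        self.serial = serial
        self.camera_on = enable_camera_feed

class NewRobot:
    def __init__(self, serial):   # the keyword argument was removed here
        self.serial = serial

OldRobot("00e20100", enable_camera_feed=True)       # works fine

try:
    NewRobot("00e20100", enable_camera_feed=True)   # an old-style call against the new class
except TypeError as error:
    print(error)   # prints the same kind of message I was seeing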
What a relief! For that alone, I considered this project a “success” in that I followed through to the end.
Although I couldn’t get this program to work, that doesn’t mean I can’t try to write my own program utilizing similar features. This project taught me a lot of different lessons and I can’t wait to try more Google Cloud Platform projects while I have the free trial!
Let’s just say summer is in full throttle heat mode here in Florida. I’ve been a busy baking bee making sweets and decorations for my daughter’s never-ending birthday festivities, which was fun, but I’m glad to be back to the study realm! And the unbearable heat makes a perfect excuse to hide indoors, study, and work on some projects.
Team Treehouse Learning Update
I’ve resumed where I left off in the Python Track, and I’m actually happy I didn’t finish this track earlier, because Team Treehouse has recently released an update to it. When I was in my previous course (Python Collections, which is now retired), I found myself losing some steam. I thought perhaps it was me, but, considering the revision, maybe there was something a little off with that former course layout. The new course layout, so far, has been wonderful! It did set me back a bit, relearning some things I had already gone over, such as tuples and slices, but I can definitely use the repetition since I’m a newbie.
Funnily enough, my daughter and I are exactly the same type of learner (from what I can see so far). Just like her, it takes me a few days for information to really sink in. But when it clicks, it’s magic! When I finally returned to the Python Track after a little break, I found that the break was very beneficial, and I’ve been able to complete the quiz questions without reaching out to the community for help, which is a very rewarding feeling!
Advanced Interests
Although learning the basics is super exciting, this field is so vast and plentiful that I have been really wanting to figure out the answer to the important question: what will I do with all of this? Yes, I’m learning Python, but then what? What do I want to do with Python (besides get an awesome job one day and help provide for my family, of course). Although I’m not sure yet, I’m paving the road. So far, these are the more advanced areas I’m heading towards learning about:
Neural Networks
Natural Language Processing
Machine Learning
I have a book on Natural Language Processing, and have printed out many (awesome!) Python cheat sheets (all available for free online) about all of these topics and more. I definitely love a good infographic / cheat sheet. In college, I was the study sheet queen (or crazy person?) who would sit in the library and make comprehensive study materials. Hopefully one day I will be advanced enough to make my own Python (and more!) cheat sheets / infographics.
Although I’m still a newbie, it seems like I will head forward in this field with two things in mind: Data and AI.
Regarding data, I used to work at a place that was heavily reliant on data input. Thinking back, with just a few of the projects I’m learning in Automate The Boring Stuff With Python by Al Sweigart, a lot of companies could save time and money with a more efficient data-entry system.
I had taken a science fiction thesis class in college, and we had gone over so many of Asimov’s robotic laws. Never would I have predicted that I’d be here one day, on the other end of wanting to learn how to create and work with AI. But I truly love it so far! Personally, I would like to help develop a type of AI that helps combat depression rooted from anxiety and a lack of presence. That’s all I’ll say about that for now 😉
Raspberry Pi Project Updates
I have an internal clock of guilt when my Raspberry Pi goes untouched for too long. But on the other hand, I know that whenever I do decide to delve into a Raspberry Pi project, it usually requires a lot of time and energy, uninterrupted. Luckily, these past few days I’ve gotten a break from baby duty which helped me dive into some Raspberry Pi fun.
Vector SDK on Raspberry Pi
In my previous post, you will see how I was able to set up and run the Vector SDK app “Remote Control” from my Mac. Wonderful! But, I only got that working as a test run for the true goal: setting up the Vector SDK on my Raspberry Pi. I wanted this set up because it honestly just made me feel cool to be able to control my little robo buddy (who I call El Robo to my daughter) on this tiny computing device that I had built by hand.
First, I had to update Python on the Raspberry Pi from version 2.7 to 3.6 or higher. Oh my god. The headache with such a simple update is quite hilarious. But I’ve read on many forums that sometimes just setting up Python can deter new users with the troubleshooting involved, and I can see why! I must have spent hours yesterday just trying to get the update to properly register so I could continue with the Vector SDK. I had installed 3.7 and still encountered some problems, so I tried 3.6. I most likely have every version on my Raspberry Pi now, ha! But at last, 3.6 worked.
I was able to get into the Vector SDK and begin downloading the necessary updates and files. But, then I encountered a large problem:
pip install Pillow just didn’t want to work.
It was 2 AM and my head was pulsating. I had spent hours trying different updates. Finally, after parsing through the wall of red error text, I realized that I needed to get this Pillow thing to work.
At the time, I had no idea what it was, so I googled it. I learned that Pillow is a Python imaging library. Of course it was important to get it working, since Vector has a camera and image capabilities. I went to bed feeling okay about the defeat, because I knew that I didn’t “call it quits” because I couldn’t figure it out; I called it quits because my brain was starting to turn to mush and my typing and thinking were becoming sloppy. I knew that if I just had a good night’s sleep and returned in the morning, I would be better able to fix the problem without getting frustrated.
So, this morning I woke up and got right back to it. I re-read the error message and saw that Pillow had some dependencies that were not allowing it to install properly. So, I installed those dependencies and restarted the process.
For a little while there, I was misreading the error as an issue with the directory path. Let’s just say I learned a lot about directories in the process, and that in the end, that wasn’t the issue at all.
Anywho, finally…finally Pillow was installed successfully. It was then that I realized that the Vector SDK should work now.
Finally, I was in! From there, I was able to use what I had learned the first time around on my Mac, and opened up Apps > Remote Control so I could fully control Vector through the Raspberry Pi.
It was very, very exciting! Of course, the small screen I have is not ideal for the Vector SDK Remote Control app, but it’s pretty damn cool nonetheless.
Reflections on Progress
Finally, my mind is starting to be able to think in code. I’ve been waiting so long for this moment when I could have an idea, and then know, at least a little, about how to accomplish that idea.
Before messing around with the Raspberry Pi and Vector SDK, I was working on a much smaller “snack” project: displaying a text string character by character with a delay, so that it looks as if the computer program is typing to you through the terminal. (Yes, I recently re-watched the Matrix trilogy, which greatly inspired this snack project.) The idea came to me vaguely, and I wondered if it was possible. In that moment, my mind remembered the Raspberry Pi LCD project, where I had read code that used a time.sleep feature. It was then that I realized I knew a little bit about the task I wanted to accomplish. So, I made a very small program:
import time
import sys
from random import randrange

def introduction(*args):
    text = "\n Neo, this is Morpheus. \n Follow the white rabbit."
    for c in text:
        # print one character at a time, without waiting for a newline
        sys.stdout.write(c)
        sys.stdout.flush()
        # pause for a random 0.1-0.3 seconds between characters
        seconds = "0." + str(randrange(1, 4, 1))
        seconds = float(seconds)
        time.sleep(seconds)

introduction()
This is a very small and simple program, but I cannot describe to you the amount of fun I was having with it. It was this that led into the night of Raspberry Pi & Vector fun. Here’s what excited me:
I learned and know what *args is (yay! Thanks Team Treehouse!)
I knew how to call the function
I could read and understand (most) of the function
At Last! It’s Sticking!
Lastly, throughout the day and night, the amount of Googling and reading of forums I had to do for troubleshooting was significantly less than for the LCD screen project. Things seem much more demystified this time around than in previous projects. I think that, finally, a lot of my readings are beginning to sink in. But best of all, being able to mess around in the terminal, typing quickly and confidently, was such a rewarding experience. Moreover, I could feel a difference in my knowledge level just by how I was Googling my questions.
I remember back when I first got the Raspberry Pi, how I had to Google almost every term in a sentence before I even knew how to construct a proper search query for my issues. But now, I was able to separate what I needed from what I didn’t without any extra steps! That was such a rewarding feeling, and it was a type of progress and acknowledgement I could only give myself, which was also unique and beneficial.
Little ten-year-old me would have been so proud and impressed right now. Although I was only doing basic things, past-me would have thought that we weren’t smart enough to learn all of this. I’m glad to be proving all of my insecurities wrong.