AMFOSS TASKS
“Success is normally found in a pile of mistakes.” - Tim Fargo
If there was ever a point in my life where I truly felt this quote, it was after I started doing the tasks required to join the amFOSS club at Amrita Vishwa Vidyapeetham.
amFOSS is an open-source club made up of a few motivated students who promote and contribute to free and open source software. It helps students learn outside of academics and get introduced to the world beyond college, so quite naturally I was interested in joining. I first came to know about the club through Quora, where there were a lot of posts about it (which, I found out later, were the result of amFOSS members spamming the website).
Task 1 & 2
My first direct introduction to amFOSS, however, was through the induction ceremony organized by the college. I registered for the club immediately and soon got the tasks. At first, I felt quite good about them. Task 1 was a breeze; all I had to do was run an automated script on GitHub. Task 2 was a bunch of typical programming challenges. I had learnt Python during my vacation since I had way too much free time on my hands, and that came in handy: I did the challenges in Python.
TASK 1
TASK 2
Task 3
After these, I started working on task 3, which came very close to making me pull my hair out. I had to write a program that scraped Google Search using Ruby and Nokogiri. I read through most of the Ruby and Nokogiri documentation and even scraped a smaller website as part of a blog-post tutorial, so I was feeling quite confident about cracking the task; that lasted right up until I actually started it. I had underestimated the complexity of a Google search page, and no matter what I tried, I could not parse any useful result from it. I immediately pinned this on my lack of knowledge of HTML and CSS, so to fix that I started learning both from an online source. It took me about two days to pick up the syntax of HTML and CSS. (Even though the PDF given to us explicitly told us not to get sidetracked into learning a whole language, I couldn't stop until I felt I had the basics down.) This gave me a clearer understanding of the problem and made Nokogiri much easier to use, but at the end of the day I still had no results to display. It almost seemed as if the Google search page was built specifically to prevent scraping. That is as far as I got with the task.
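Just to show the general approach I was attempting: the task required Ruby and Nokogiri, but the same "fetch a page and pick elements with CSS selectors" idea, written in the Python I am more comfortable with, looks roughly like the sketch below. The query and selector are only illustrative, and Google's obfuscated, ever-changing markup (plus its dislike of automated requests) is exactly where I got stuck.

```python
# Rough sketch of the scraping idea, in Python (requests + BeautifulSoup)
# instead of the Ruby + Nokogiri the task asked for.
# The query and selector are illustrative only; Google's class names are
# obfuscated and change often, and it actively resists this kind of request.
import requests
from bs4 import BeautifulSoup

query = "open source clubs in india"
url = "https://www.google.com/search"
headers = {"User-Agent": "Mozilla/5.0"}  # without a browser-like UA, Google often refuses

response = requests.get(url, params={"q": query}, headers=headers, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

# On a normal HTML page this is how you pull elements with a CSS selector;
# on Google's result page the markup rarely cooperates, which matches what I saw.
for heading in soup.select("h3"):
    print(heading.get_text(strip=True))
```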
TASK 3
Task 4
Dejected at not being able to solve the task, I moved on to the next one, “Advanced XOR”. I was completely new to encryption of any sort, so I read up on it and learnt what the terms key, check hash and ciphertext stood for. Then I proceeded to read the encryption script. I worked on it for a while, but in the end I simply could not understand it.
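Looking back, the basic idea behind XOR encryption is simple: each byte of the plaintext is XOR-ed with a byte of the key (cycling through the key), and applying the exact same operation again gives the plaintext back. The snippet below is only my own minimal illustration of that idea in Python, not the task's actual script.

```python
# Minimal illustration of a repeating-key XOR (my own sketch, not the task's script).
# XOR-ing with the same key twice returns the original bytes,
# so the same function both encrypts and decrypts.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"hello amfoss"
key = b"secret"

ciphertext = xor_cipher(plaintext, key)
recovered = xor_cipher(ciphertext, key)

print(ciphertext.hex())
assert recovered == plaintext
```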
Task 5
This brings us to the next task. I started it with apprehension because GraphQL was a relatively new query language. I began by reading about APIs and the different ways they are used to query information, and then about REST. I understood that GraphQL is an improvement over REST, since several pieces of related data can be fetched in a single query, whereas REST typically needs multiple requests to get the same result. After acquiring this knowledge I started working on my website, which proved to be relatively simple; I finished it in about a day. I also read about GraphQL on multiple websites, but since there wasn't much information around, I relied heavily on its official documentation. The GraphiQL explorer was a fun way of trying out GraphQL, and once I started experimenting there, the easy syntax meant I got the query working quickly. This is where I ran into my blockers. I had two main blockers in this task, and they haunt me to this day. They were:
i) Authenticating the query from JavaScript.
ii) Implementing GraphQL in JavaScript.
This took up about a third of my total time and was a huge pain in the ass. I looked at a variety of libraries and clients to resolve the issue, including graphql.js, Node.js and the Apollo client for GraphQL. I even went as far as trying to execute it through a Python script using Django, after reading how it had been implemented directly in the amFOSS GitLab repository. Needless to say, this task taught me how not to approach an issue, and I wasted a lot of time on it that could have gone into the other tasks.
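For what it's worth, the thing I was fighting with is conceptually small: a GraphQL query is just a POST request with a JSON body, and authenticating it usually means attaching a token in an Authorization header. The task wanted this from JavaScript, and I even tried going through Python, so here is roughly what the idea looks like in Python with the requests library. The endpoint, token and query fields below are placeholders, not any real API.

```python
# Rough sketch of an authenticated GraphQL query over HTTP.
# The task wanted this from JavaScript; this shows the same idea in Python.
# The endpoint, token and fields are placeholders, not the real API.
import requests

ENDPOINT = "https://example.com/graphql"   # placeholder GraphQL endpoint
TOKEN = "my-secret-token"                  # placeholder auth token

query = """
query {
  viewer {
    name
  }
}
"""

response = requests.post(
    ENDPOINT,
    json={"query": query},                          # the query travels as JSON
    headers={"Authorization": f"Bearer {TOKEN}"},   # this is the "authentication" part
    timeout=10,
)
response.raise_for_status()
print(response.json()["data"])
```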
TASK 5
Task 6 & 7
I made negligible progress on tasks 6 and 7. All I did was study the syntax of Rust and install it on my laptop.
Task 8
Captcha breaking was a very simple task and a welcome change after task 5. All I did was install a couple of packages from Google (Tesseract OCR); after that it was fairly straightforward to pull the text out of the images using the OCR.
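The whole thing boils down to a few lines. Assuming the Tesseract binary is installed along with the pytesseract and Pillow packages (which is how I drove it from Python), reading the text out of a captcha image looks roughly like this:

```python
# Reading text from a captcha image with Tesseract OCR.
# Assumes the tesseract binary is installed, plus pytesseract and Pillow.
import pytesseract
from PIL import Image

image = Image.open("captcha.png")   # path to the captcha image
image = image.convert("L")          # grayscale usually helps the OCR a little

text = pytesseract.image_to_string(image)
print(text.strip())
```

A little preprocessing (grayscale, thresholding, scaling up) tends to improve Tesseract's accuracy on noisy captcha images.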
TASK 8
Task 9
Creating a website using Jekyll themes was also pretty straightforward. Here the knowledge I had picked up while learning HTML and CSS came in handy, and editing the website was a breeze. I found a super cool minimalistic theme on Jekyll Themes, forked it, and had a website up without much fuss.
TASK 9