
A tour of the Indian Institute of Science, Bangalore


Last week I had the opportunity to visit the Indian Institute of Science in Bangalore for 5 days, and after the trip, I must say I absolutely regret flunking KVPY! For those of you who don't know, KVPY is an additional exam that students take during their amazing days of JEE preparation. The point is, I really loved the campus and the people I networked with. I was there to attend a workshop on High Performance Computing and related fields.


A big thank you to Swapnil Parekh, who recommended and invited me to this course. At first, I was really hesitant to participate, since there was nothing in the course that could directly be of use to me as a student, or as a developer with my current tech stack experience. Still, you never know what good might come of it, and after all, it's something new to learn. It could be a great experience. Yeah, I know the reasoning isn't very convincing, but I went for it anyway.




My review of Bangalore wouldn't really be negative. It's a fairly polite city that could use some more Hindi-speaking people. I spent far less time exploring Bangalore than exploring the campus of IISc. I must say, it's a fairly large campus, a little bigger than my home campus, NIT Hamirpur. I had a lot of fun cycling through its many connected lanes!


Let's talk business

Obviously, most of my time was spent on the purpose of the visit: learning about High Performance Computing! I am planning to write separate blog posts on each topic in detail, so here's an overview.

Parallel Architectures

The course began with a great lecture on parallel architecture and the need for it. Since the workshop was open to people from all industries, the introduction stayed fairly general. I could follow most of it thanks to my Computer Organization course in the previous semester. We were taught mainly about instruction-level parallelism, superscalar execution, memory management, etc.
I think these topics would mainly interest my peers from IIITs and NITs, since they are still fresh in our minds. I also saw a practical use case of Flynn's taxonomy in multiprocessor architecture. Well, multiprocessors do seem really fun and exciting until you realize that all the parallel processing goes for a toss if you have a lot of inter-related computations on the data. In the hands-on session, there were case questions where I thought it would be better to spare ourselves the trouble and simply run everything on one processor!
Anyway, AI/ML engineers would know the advantage of having so many processors and GPUs. With a lot of data come a lot of GPU requirements. GPUs are not as intimidating as they sound: they are processing units used to offload computations from the main CPU when there is a lot of data to process.
If you have m different processors and a loop that runs n times, you can parallelize it and bring the time complexity down from O(n) to O(n/m). The more processors, the less time the computation takes.

Now the question is, we can't get away with simply saying "parallelize the code." How do you actually do that?

Parallelization Principles

Once we had studied the architecture, it was time to learn how to structure our algorithms and code to maximise processor utilisation. The following days also included hands-on sessions where we could run our code on the clusters of the SERC department at IISc. In cases where the computation always uses a predictable section of the data, it is indeed satisfying to see all n computations execute separately in one go. But consider a more complex situation such as sorting, and things start to take an interesting turn.

OpenMP

This was the lecture where I learnt the most about processors and cores. What if we could use multiple threads on just one processor, using the multiple cores of our PCs? Most of the applications we use do take advantage of quad-core or octa-core processors, but our naively written programs usually run on just one core, on a single thread, unless we ask the compiler to graciously parallelize them using a library called OpenMP. A processor has memory shared (unless otherwise specified) among its many cores, and multiple threads updating the same piece of data gives rise to a race condition. So we need to define a critical section. Yes, a topic borrowed from my Operating Systems course. This workshop was like a flashback test of my previous semesters on many levels.
I think I had the best time during the hands-on session using this library. 

MPI

Now comes the real deal. A processor, or a cluster's node, has a limited number of cores. But supercomputers (and clusters in general) have multiple processors. In this architecture, groups of processors or cores form a node, and each node has its own associated memory. If we want to use more processors, they will have separate memories. So if data is updated by one processor, another processor that needs the updated value cannot access it, unless we send that data from one processor to the other over the network that connects them.
That is why we need the Message Passing Interface (MPI): to specify the data we need to send and the data we want to receive. It has a lot of interesting applications, and if you are writing code to run on multiple processors, this is the tool for you.

OpenACC

We can also use GPUs to accelerate our computations, and OpenACC is a directive-based standard for exactly that. With multiple cores using the same memory, it is kind of like OpenMP, except that OpenACC directs the compiler to offload the computations to an attached GPU. I think this topic would be very important for AI/ML enthusiasts, whether training models or writing the internal code that trains them.

Introduction to Big Data

A brief introduction to Big Data and ML topics, and how we can use all of the above to get better results, spurred my inner curiosity towards this amazing field. Anyway, if you're reading this and you have studied Machine Learning, there's not really anything here that you won't already know. And if you're new to the field, there's still not much I can tell you about it here :)


Well, that concluded the workshop, and we finally left the beautiful city of Bangalore with a ton of knowledge and experience.

I would like to thank the Supercomputing Education and Research Centre (SERC) for conducting this workshop, and also the professors and industry personnel who contributed their time to it. A special thanks to Professor Aditya, Professor Akhila, Professor Yogesh Simmhan and Professor Govindarajan for this amazing workshop.
We had a great time!

 



