
Building a Lexical Analyzer - Compiler Frontend I

Hey guys! Today we'll be building a fully functional lexical analyzer that detects unrecognized tokens in C++ source code!

Beginning:

The foundation of this program is derived from another program that counts the different tokens in a C/C++ program. While I will try to explain all the basics of Lex here, if you find yourself stranded, you can refer to that program too. Once you understand the method of identifying tokens, the following program is just the converse of it!
Here's a basic overview of what happens in a Lex Program:



Some basics about a Lex Program

Lex (Flex) scans through your entire input, trying to match it against the patterns listed in the rules section. Whenever a pattern matches, it executes the action enclosed in the {} braces next to it.
So you can think of each rule as an informal function definition: a pattern paired with the code that runs when that pattern matches.
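As a rough sketch (the token names and printf messages here are my own, not necessarily those of the original program), a rule behaves like a function whose "signature" is a pattern:

```lex
%%
    /* pattern        action: runs whenever the pattern matches */
[0-9]+              { printf("Number: %s\n", yytext); }
"+"|"-"|"*"|"/"     { printf("Operator: %s\n", yytext); }
%%
```

In C terms, each rule is like a handler function that the scanner calls for you whenever its pattern matches the next stretch of input; yytext holds the matched lexeme.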

Rule section in a Lex program vs. the equivalent handling in a C/C++ program:

It is important to note that Flex does not treat the input as isolated characters. Remember, each rule in the rules section is just a regular expression.
And if you have completed your second year, you might know the answer to: what recognizes a regular expression?

A DFA.  

So the input is accepted as long as the incoming character sequence keeps satisfying a rule. That is why the rule [0-9] is different from [0-9]+.
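For instance, on the input 123, a lone [0-9] rule fires three times, one digit per match (a minimal sketch; the message text is my own):

```lex
%%
    /* With only this rule, "123" is reported three times: 1, 2, 3 */
[0-9]     { printf("Digit: %s\n", yytext); }
%%
```

Replace the pattern with [0-9]+ and the same input is reported once, as the single token 123, because Flex always prefers the longest possible match.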

For more comprehensive information about the rules section, I would direct you to GFG's page on Flex.
If you want to further understand the inner workings of a Lex program, refer to the 'Lex - Theory' section of this PDF.

The Code:
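Here is a minimal sketch of the kind of program we're building (the names identify, symtab, and lookup are illustrative, not necessarily the exact original listing; the pieces are explained step by step below):

```lex
%{
#include <stdio.h>
#include <string.h>

int identify = 0;          /* flag: are we inside a declaration? */
char symtab[100][32];      /* crude symbol table: a 2D char array */
int symcount = 0;

/* return 1 if the name was declared earlier, else 0 */
int lookup(const char *s) {
    for (int i = 0; i < symcount; i++)
        if (strcmp(symtab[i], s) == 0) return 1;
    return 0;
}
%}
%option noyywrap

%%
"int"|"float"|"double"|"char"   { printf("Keyword: %s\n", yytext); identify = 1; }
"if"|"else"|"while"|"for"       { printf("Keyword: %s\n", yytext); }
";"                             { printf("Separator: %s\n", yytext); identify = 0; }
","|"("|")"|"{"|"}"             { printf("Separator: %s\n", yytext); }
"="|"+"|"-"|"*"|"/"             { printf("Operator: %s\n", yytext); }
[0-9]+                          { printf("Number: %s\n", yytext); }
[a-zA-Z_][a-zA-Z0-9_]*          {
                                  if (identify) {
                                      /* inside a declaration: record and accept */
                                      strcpy(symtab[symcount++], yytext);
                                      printf("Identifier: %s\n", yytext);
                                  } else if (lookup(yytext)) {
                                      /* declared earlier: accept */
                                      printf("Identifier: %s\n", yytext);
                                  } else {
                                      printf("Unrecognized token: %s\n", yytext);
                                  }
                                }
[ \t\n]                         ;
.                               ;
%%

int main(void) { yylex(); return 0; }
```

Build it the usual way: flex lexer.l && cc lex.yy.c -o lexer, then pipe a C++ snippet into ./lexer.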



The Explanation:

So now that I've shown you the code, let's walk through the process of building a Lexical Analyzer. 

When should we output an error?
As soon as we see any non-keyword string. Like,
fore(int i = 0....
Yes, that should be reported as an unrecognized token, while all the other keywords, operators, and separators should be accepted as perfectly valid tokens.

Yet we are forgetting something incredibly important: identifiers.
Of course! The moment you write int a, it would report a as an unrecognized token. That shouldn't happen.

So now we need to make sure that as soon as one of the 'int', 'float', 'double', etc. keywords is encountered, the subsequent strings are accepted as identifiers until a ';' is encountered.
int a, b, c;
To do that, I added an int variable in the definitions section, which I'm using as a boolean flag to toggle the acceptance of identifiers. You can see that in the rules for "int" and "float" I'm setting the identify variable, and in the ";" rule I'm resetting it again.
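In rule form, the toggling looks something like this (a sketch; it assumes an int identify declared in the definitions section):

```lex
"int"|"float"|"double"    { printf("Keyword: %s\n", yytext); identify = 1; }
";"                       { printf("Separator: %s\n", yytext); identify = 0; }
```

So while scanning int a, b, c;, the flag is raised at int, stays up through a, b, and c, and drops at the semicolon.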

If the Lex program encounters anything other than keywords, operators, and separators, it will now check whether it is accepting identifiers through the identify variable.



Great, problem solved. Or is it?
We've solved the integer declaration problem, but what if someone tries to initialize the variable after declaring it? We have no answer for that!
a = 5;
So we need to save the declared variables in a... (yes, you guessed it right!) Symbol Table. The crux of the compiler, and we've finally realized its importance through our implementation failures!

Ideally, to construct the symbol table we would build a map. But since the code sections have to be in C, I went ahead with a 2D character array, just to store the string values of the identifiers. This can later be improved into a hash table, but for now, let's just focus on accepting the right tokens.

So when we're deciding whether a token is an identifier, all we need to do is add it to the table if the identify variable is set; otherwise,
check the 2D char array for an occurrence of the string. If it is present there, accept it as an identifier.
If all this fails, it's an invalid token. You're free to output the most savage errors you can imagine.

Note:
If you look at the program, you might still not find all the keywords in it. You are free to add the remaining ones to the "while"|"if".... rule. It's child's play now that you've understood the working of a lexical analyzer, the first part of the compiler frontend.
