Tree-sitter and its Query

Tree-sitter is a parser generator; that is, we can leverage it to generate a parser for a specific language. Beyond that, it also offers other functionality. For example, we can write a Query using S-expressions, which performs pattern matching on the AST. Before we delve into this feature, let's cover some background.

Info
If you have ever written Lisp (or one of its dialects, such as Scheme or Racket), you should already be familiar with S-expressions.

We can define an S-expression recursively: an S-expression is either an atom, or an expression of the form (x . y), where x and y are themselves S-expressions.
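In Tree-sitter, a query pattern is itself written as an S-expression. Here is a minimal sketch using the py-tree-sitter bindings (assuming the pre-0.22 API; the grammar library path is a placeholder, and the node names come from the tree-sitter-python grammar):

```python
from tree_sitter import Language, Parser

# Load a pre-built Python grammar (the .so path is a placeholder).
PY_LANGUAGE = Language('build/my-languages.so', 'python')
parser = Parser()
parser.set_language(PY_LANGUAGE)
tree = parser.parse(b"def foo():\n    pass\n")

# The query itself is an S-expression: match every function definition
# and capture its name node as @func.name.
query = PY_LANGUAGE.query("(function_definition name: (identifier) @func.name)")
for node, capture_name in query.captures(tree.root_node):
    print(capture_name, node.text)  # func.name b'foo'
```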

Learn to Use @dataclass in Python

I like Python’s tuples, which allow us to quickly bundle together values of different types as a single entity and manage them in an intuitive, easy-to-use way. However, I find that once a tuple has many fields, I’m forced to add a comment indicating the meaning of each field. For example:
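Something like the following (the Point record here is a made-up example), along with how @dataclass makes the fields self-documenting:

```python
from dataclasses import dataclass

# With a plain tuple, the field meanings live only in a comment:
point = ("p1", 1.0, 2.0, True)  # (name, x, y, visible)

# With @dataclass, each field is named and typed:
@dataclass
class Point:
    name: str
    x: float
    y: float
    visible: bool

p = Point("p1", 1.0, 2.0, True)
print(p.x)  # fields are self-documenting: 1.0
```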

Use GitHub Actions to automate Hugo blog deployment

Recently I started learning GitHub Actions, a feature provided by GitHub that can be used to automate a series of steps. In software development, the most common use of this feature is probably automating builds. For statically typed programming languages such as C/C++, we usually need to write build scripts; the build process involves preparing the environment, downloading dependencies, and executing the build. However, automating software builds with GitHub Actions is not the focus of this post. As I was learning this feature, I thought about how I could put it into practice and realized I could use it to automate the build and deployment of my Hugo blog :)

Tail call and Tail-call Optimization (TCO)

Assume that function A calls function B. We call function A the Caller and function B the Callee.

A tail call occurs when calling the Callee is the very last thing the Caller does, so the Caller needs nothing more than the Callee's return value; everything else has already been completed1.

If this is a recursive call (i.e., the Caller and the Callee are the same function), then the function is said to be tail-recursive.
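A minimal sketch in Python (note that CPython itself does not perform TCO; the example only illustrates which calls are tail calls):

```python
def fact_tail(n, acc=1):
    if n <= 1:
        return acc
    # Tail call: the recursive call is the last action, so a
    # TCO-capable compiler could reuse the current stack frame.
    return fact_tail(n - 1, acc * n)

def fact_plain(n):
    if n <= 1:
        return 1
    # NOT a tail call: the multiplication still happens after
    # the recursive call returns.
    return n * fact_plain(n - 1)
```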

Learn to use text-objects in Vim & Neovim

You probably do not know what a text-object is in Vim/Neovim, yet you may already use them in your daily editing. For instance, when you are writing code, you may want to change the arguments of a function call. Take the following code as an example: let's say you want to change the function call to bar(3, 2, 1), and the cursor currently sits on the ,
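A small illustration (the original snippet is not shown in this excerpt, so assume the call starts as bar(1, 2, 3)):

```python
def bar(a, b, c):
    return (a, b, c)

bar(1, 2, 3)  # suppose the cursor is on one of the commas
# In normal mode, ci( ("change inner parentheses") deletes everything
# inside the parentheses, no matter where the cursor sits within them,
# and drops you into insert mode. Typing 3, 2, 1 then <Esc> yields:
bar(3, 2, 1)
```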

LLM inference optimization - KV Cache

The secret behind an LLM is that it generates tokens one by one, each conditioned on all the previous tokens.

Let’s assume that we have already generated $t$ tokens, denoted by $x_{1:t}$. In the next iteration, the LLM will generate $x_{1:t+1}$. Note that the first $t$ tokens are the same.

$$x_{1:t+1}=\text{LLM}(x_{1:t})$$

The next iteration is similar.

$$x_{1:t+2}=\text{LLM}(x_{1:t+1})$$

In summary, each iteration feeds the output of the previous round back into the LLM as the new input. Generally, this process continues until the output reaches a predefined maximum length, or until the LLM generates a special token signifying that generation is complete.
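A minimal sketch of this loop (llm_step, eos_token, and max_len are hypothetical placeholders, not a real API):

```python
def generate(prompt_tokens, llm_step, max_len, eos_token):
    tokens = list(prompt_tokens)
    while len(tokens) < max_len:
        next_token = llm_step(tokens)  # x_{1:t} -> x_{t+1}
        tokens.append(next_token)      # the output becomes the next input
        if next_token == eos_token:    # special end-of-sequence token
            break
    return tokens
```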

LoRA fine-tuning

Since the era of LLMs (large language models) arrived, fine-tuning them has become a challenge: the models are so large that full fine-tuning is often impractical. There are mainly two approaches: freeze the entire LLM and perform prompt tuning or in-context learning; or freeze the entire LLM but insert trainable modules. Today, I will introduce LoRA (Low-Rank Adaptation), which follows the latter approach. This is a work proposed by the Microsoft team1
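A minimal PyTorch sketch of the core idea (not the official implementation; the r, alpha, and init values are illustrative): the pretrained weight $W$ is frozen, and only a low-rank update $\Delta W = BA$ is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():   # freeze the pretrained W (and bias)
            p.requires_grad_(False)
        # delta_W = B @ A has rank at most r; B starts at zero, so the
        # adapted layer initially behaves exactly like the frozen one.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```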

The next lexicographical permutation problem

Occasionally, you may want to get the next/previous lexicographical permutation of a sequence. How would you do that? If you are a C++ programmer, you are probably familiar with the next_permutation1 and prev_permutation2 APIs. However, Python does not provide counterparts, so today's topic is how to do this in Python. Since the solutions for the previous and the next lexicographical permutation are very similar, let us focus on the next lexicographical permutation problem.
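For reference, here is one standard way to implement it in Python (a sketch that mirrors the semantics of C++'s std::next_permutation):

```python
def next_permutation(seq):
    """Rearrange seq in place into the next lexicographical permutation.

    Returns False if seq is already the last permutation, leaving it
    sorted ascending, just like std::next_permutation.
    """
    # 1. Find the rightmost index i with seq[i] < seq[i + 1] (the pivot).
    i = len(seq) - 2
    while i >= 0 and seq[i] >= seq[i + 1]:
        i -= 1
    if i < 0:           # whole sequence is descending: no next permutation
        seq.reverse()
        return False
    # 2. Find the rightmost element greater than the pivot and swap them.
    j = len(seq) - 1
    while seq[j] <= seq[i]:
        j -= 1
    seq[i], seq[j] = seq[j], seq[i]
    # 3. Reverse the descending suffix so it becomes the smallest one.
    seq[i + 1:] = reversed(seq[i + 1:])
    return True

seq = [1, 2, 3]
next_permutation(seq)
print(seq)  # [1, 3, 2]
```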

BPE Tokenization Demystified: Implementation and Examples

In NLP, one crux of the problem is how to tokenize the text. There are three methods available:

  • Char-level
  • Word-level
  • Subword-level

Let’s talk about the char-level tokenizer first. That is, we tokenize the text into a stream of chars. For instance, highest -> h, i, g, h, e, s, t. One advantage of the char-level tokenizer is that the vocab stays small: its size equals the size of the alphabet, so you will probably never meet the infamous out-of-vocabulary (OOV) problem. However, the downside is that a single char does not convey much information, and we end up with far too many tokens after tokenizing. Imagine that a simple word like highest already gives us 7 tokens 😨
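In Python, char-level tokenization is literally a one-liner:

```python
tokens = list("highest")
print(tokens)  # ['h', 'i', 'g', 'h', 'e', 's', 't'] -- 7 tokens for a single word
```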

TF-IDF model

In the previous post, we talked about the bag-of-words model, which has many limitations. Today we take a step further and see if we can fix one of them: every word is treated as equally important.

💡 The crux of the problem: how do we define a word's importance?

One idea: the more frequently a word appears within a single document, the more important it is for that document. For instance, in an article discussing dogs, the word “dog” is likely to appear frequently, reflecting the document’s main topic.
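This intuition is exactly the term-frequency (TF) part of TF-IDF. A minimal sketch of one common TF variant (raw count normalized by document length):

```python
from collections import Counter

def term_frequency(doc_tokens):
    # tf(t, d) = count of t in d / total number of tokens in d
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    return {term: count / total for term, count in counts.items()}

print(term_frequency("the dog chased the dog".split()))
# {'the': 0.4, 'dog': 0.4, 'chased': 0.2}
```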

Bag-of-Words model

In NLP, we need to represent each document as a vector because machine learning models can only accept numbers as input. That is, we want to find a magic function such that: $$ f(\text{document}) = \text{vector} $$

Today’s topic is the bag-of-words (BoW) model, which can transform a document into a vector representation.
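A minimal sketch of the idea (the toy vocabulary is made up for illustration): each slot of the vector counts how often the corresponding vocab word occurs in the document.

```python
def bow_vector(doc_tokens, vocab):
    # One count per vocabulary word, in a fixed order.
    return [doc_tokens.count(word) for word in vocab]

vocab = ["dog", "cat", "the"]  # a toy vocabulary
print(bow_vector("the dog saw the dog".split(), vocab))  # [2, 0, 2]
```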

💡 Although the BoW model is outdated in 2023, I still encourage you to learn from history and think about some essential problems: