How to Remember Red-Black Tree Operations

If you clicked into this article, you are probably like me: every time I learn red-black trees I forget them again. There are tree rotations to do, nodes to recolor, and the details slip away as soon as I finish studying. I have carefully read CLRS's Introduction to Algorithms, but its case-by-case analysis only gave me a bigger headache.

Of course, the best way to remember something is always to understand it. Recently, while reading the slides for Stanford's CS166 advanced data structures course, I finally felt that I understood red-black trees: a red-black tree is an isometry of a 2-3-4 tree. The two are the same structure represented in different ways, which means we can derive the shape and color changes in a red-black tree from the node changes in the corresponding 2-3-4 tree, and the node changes in a 2-3-4 tree are much simpler.
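The core of the isometry is how a single 2-3-4 node maps to a small red-black fragment. A minimal sketch (the `Node` class and function name here are my own, not from the post or the CS166 slides):

```python
# Sketch of the 2-3-4 -> red-black correspondence for a single node.
# A 2-node [b] becomes a lone black node; a 3-node [a, b] becomes a
# black node with one red child; a 4-node [a, b, c] becomes a black
# node with two red children. Red edges are the "glue" inside one
# 2-3-4 node; black nodes mark 2-3-4 node boundaries.

class Node:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color = key, color
        self.left, self.right = left, right

def from_234_keys(keys):
    """Map the sorted keys of one 2-3-4 node to a red-black subtree fragment."""
    if len(keys) == 1:
        return Node(keys[0], "black")
    if len(keys) == 2:
        a, b = keys
        return Node(b, "black", left=Node(a, "red"))
    if len(keys) == 3:
        a, b, c = keys
        return Node(b, "black", left=Node(a, "red"), right=Node(c, "red"))
    raise ValueError("a 2-3-4 node holds 1 to 3 keys")

root = from_234_keys([10, 20, 30])
print(root.color, root.left.color, root.right.color)  # black red red
```

Seen this way, "no red node has a red child" is just the statement that a 2-3-4 node holds at most three keys, and "equal black height on every path" mirrors the perfect balance of the 2-3-4 tree.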

A Guide to Git Bundle

git bundle is a git command you rarely see. It packs a git repository into a single file 📦, from which someone else can restore the original repository, and it also supports incremental updates. Before I knew about git bundle, I would sometimes pack a repository by simply running tar czf some_git_repo. A while ago I stumbled upon git bundle and found it quite practical 🍻
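The basic workflow can be sketched like this; the names `demo_repo`, `repo.bundle`, and `restored_repo` are hypothetical, while the git subcommands themselves are real:

```shell
set -e

# Make a throwaway repository with one commit to bundle.
git init -q -b main demo_repo
git -C demo_repo -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"

# Pack the main branch (use --all for every ref) into one file.
git -C demo_repo bundle create ../repo.bundle main

# Check the bundle, then clone from it as if it were a remote.
git -C demo_repo bundle verify ../repo.bundle
git clone -q -b main repo.bundle restored_repo

# Incremental update: ship only commits the other side lacks, e.g.
#   git bundle create update.bundle <their-last-commit>..main
```

The clone step is the key trick: a bundle file can stand in anywhere git expects a remote, so it travels over USB sticks, email, or any channel where a network fetch is impossible.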

Interpreting GAT with the MPNN Framework

Justin Gilmer proposed the MPNN (Message Passing Neural Network) framework to describe graph neural network models used for supervised learning on graphs. I find it a very useful framework: it makes it easy to understand how different GNN models work and to quickly pin down the differences between them. Consider a node $v$ in a graph $G$; its vector representation $h_v$ is updated as follows:

$$m_v^{t+1}=\sum_{u\in \mathcal{N}(v)}M_t(h_v^t,h_u^t,e_{vu})$$

$$h_v^{t+1}=U_t(h_v^t,m_v^{t+1})$$
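The two equations above can be sketched as plain Python, with scalar node states for readability. The concrete choices of $M_t$ and $U_t$ below are mine (a neural model would make them learned functions); only the message-then-update structure comes from the framework:

```python
# One MPNN step on a graph with scalar node states.

def M_t(h_v, h_u, e_vu):
    # Message from neighbor u to v: the neighbor's state scaled by the edge.
    return e_vu * h_u

def U_t(h_v, m_v):
    # Update: average the old state with the aggregated message.
    return 0.5 * h_v + 0.5 * m_v

def mpnn_step(h, nbrs, edge):
    """h: {v: state}; nbrs: {v: list of neighbors}; edge: {(v, u): e_vu}."""
    return {v: U_t(h[v], sum(M_t(h[v], h[u], edge[(v, u)]) for u in nbrs[v]))
            for v in h}
```

GAT then falls out of this template by making $M_t$ an attention-weighted message, where the weight on each neighbor is computed from $h_v^t$ and $h_u^t$ rather than fixed per edge.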

SICP Exercise 2.27

Modify your reverse procedure of exercise 2.18 to produce a deep-reverse procedure that takes a list as argument and returns as its value the list with its elements reversed and with all sublists deep-reversed as well.
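The exercise asks for Scheme; as a sketch of the idea, here is the same procedure in Python, representing Scheme lists as Python lists:

```python
def deep_reverse(lst):
    """Reverse a list and, recursively, every sublist inside it."""
    return [deep_reverse(x) if isinstance(x, list) else x
            for x in reversed(lst)]

print(deep_reverse([[1, 2], [3, 4]]))  # [[4, 3], [2, 1]]
```

The only change from a plain reverse is the recursive call on elements that are themselves lists; atoms pass through untouched.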

SICP Exercise 1.46

Several of the numerical methods described in this chapter are instances of an extremely general computational strategy known as iterative improvement. Iterative improvement says that, to compute something, we start with an initial guess for the answer, test if the guess is good enough, and otherwise improve the guess and continue the process using the improved guess as the new guess. Write a procedure iterative-improve that takes two procedures as arguments: a method for telling whether a guess is good enough and a method for improving a guess. Iterative-improve should return as its value a procedure that takes a guess as argument and keeps improving the guess until it is good enough. Rewrite the sqrt procedure of section 1.1.7 and the fixed-point procedure of section 1.3.3 in terms of iterative-improve.
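Again the exercise wants Scheme; a Python sketch of the same higher-order structure (the tolerance `1e-9` and starting guess `1.0` are my choices, not the book's):

```python
def iterative_improve(good_enough, improve):
    """Return a procedure that improves a guess until it is good enough."""
    def run(guess):
        while not good_enough(guess):
            guess = improve(guess)
        return guess
    return run

def sqrt(x):
    # Newton's method, as in SICP section 1.1.7.
    good = lambda g: abs(g * g - x) < 1e-9
    improve = lambda g: (g + x / g) / 2
    return iterative_improve(good, improve)(1.0)

def fixed_point(f, first_guess):
    # As in SICP section 1.3.3: a fixed point satisfies f(x) == x,
    # and f itself is the improvement step.
    good = lambda g: abs(f(g) - g) < 1e-9
    return iterative_improve(good, f)(first_guess)
```

Both rewrites show the point of the exercise: sqrt and fixed-point differ only in their "good enough" test and their improvement step, so the iteration itself can be abstracted out once.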