This is where I collect and write things, mostly on academia and artificial intelligence. Established in 2010 by Vaishak Belle.
Some of the writing in ML comes off as so dry: ABC is related work, we improve on the architecture of X by adding Y layers, and we show that we beat Z's results by P%.
Where's the fire, passion, and drive, folks? Or is adding poetry to hyper-parameter tuning reports just hyper-lame?