The common advice of scaling the batch size as you scale the number of workers when training your Deep Learning model doesn’t hold when your dataset is not large. Worse than that, your model may well diverge under such updates. Read on for a potentially better, guaranteed-to-converge alternative.
I’ve been (re)reading the second edition of Sutton and Barto’s Reinforcement Learning: An Introduction, and I’ve decided to do the programming exercises. (Previously, I followed Fermat’s tradition of solving exercises in the margin.)
Exercise 2.5 is nice. It asks us to demonstrate the difficulties that …
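For readers without the book at hand: Exercise 2.5 asks for an experiment showing how sample-average methods struggle on nonstationary bandit problems. Below is a minimal sketch of such an experiment, assuming the usual 10-armed testbed with random-walk drift; the names are mine, and the parameters follow the exercise’s suggestions as best I recall them, not anything stated in this post.

```python
import numpy as np

# Sketch of the Exercise 2.5 experiment: a 10-armed testbed whose true
# action values all start equal and then take a random walk, comparing
# sample-average updates against a constant step size.
def run_bandit(steps=10_000, k=10, eps=0.1, alpha=None, seed=0):
    rng = np.random.default_rng(seed)
    q_true = np.zeros(k)              # true action values, drifting
    q_est = np.zeros(k)               # estimated action values
    counts = np.zeros(k)
    rewards = np.empty(steps)
    for t in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.integers(k)
        else:
            a = int(np.argmax(q_est))
        r = rng.normal(q_true[a], 1.0)
        counts[a] += 1
        # sample average (1/n) vs. constant step size (alpha)
        step = alpha if alpha is not None else 1.0 / counts[a]
        q_est[a] += step * (r - q_est[a])
        # nonstationarity: every arm drifts by N(0, 0.01) each step
        q_true += rng.normal(0.0, 0.01, size=k)
        rewards[t] = r
    return rewards

print("sample-average:", run_bandit().mean())
print("constant alpha:", run_bandit(alpha=0.1).mean())
```

Over long runs the constant-α agent keeps tracking the drifting values, while the sample-average agent’s effective step size shrinks toward zero, which is exactly the difficulty the exercise wants demonstrated.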
Had forgotten to post this here. Enjoy! (YouTube video.)
JupyterHub supports deployment behind a reverse proxy, and its manual even explains how to set it up behind nginx. I was unable, however, to find documentation on how to do this when serving the content from a subdirectory of the reverse proxy.
Since it’s been a long time since I worked as a sysadmin …
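(Not from the post, but for orientation: on the JupyterHub side, a subdirectory deployment mostly comes down to the base_url setting; the /jupyter/ prefix below is a placeholder, and the reverse proxy must be configured to forward that path.)

```python
# jupyterhub_config.py -- minimal sketch for a subdirectory deployment.
# /jupyter/ is a placeholder prefix; your reverse proxy must forward
# requests for that path to this JupyterHub instance.
c = get_config()  # injected by JupyterHub when this file is loaded

c.JupyterHub.base_url = '/jupyter/'   # generate URLs under the prefix
c.JupyterHub.ip = '127.0.0.1'         # listen only locally, behind the proxy
c.JupyterHub.port = 8000
```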
I just released version 0.2.0 of my process-shared reader-writer lock.
This release’s highlight is the awesome work done by Marcos Assunção to add support for Windows.
We also changed the API so that you can now import the lock with
from prwlock import RWLock instead of …
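A quick sketch of the new import in use. The acquire_read/acquire_write/release method names are my recollection of the prwlock API rather than something stated in this excerpt, so treat them as assumptions and check the project’s README:

```python
from prwlock import RWLock   # new top-level import

lock = RWLock()

# Readers: several processes may hold the read side at once.
lock.acquire_read()          # method name assumed, see above
try:
    pass  # read the shared resource
finally:
    lock.release()

# Writer: exclusive access.
lock.acquire_write()         # method name assumed, see above
try:
    pass  # mutate the shared resource
finally:
    lock.release()
```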
Recently I noticed some calls I was making to a certain C API were returning pointers that I thought were invalid. A quick inspection with gdb confirmed my fears. A particular interaction showed:
```
(gdb) print *job->someMember
Cannot access memory at address 0xec00000005
```
Unfortunately I don’t have access to …
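(My addition, not from the post: when gdb refuses to dereference a pointer like this, two useful follow-ups are printing the raw pointer value instead of dereferencing it, and listing the process’s mapped regions to see whether the address falls inside any of them.)

```
(gdb) print job->someMember
(gdb) info proc mappings
```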