# :warning: Notes on differences with the original repo
* The learning rate decay in the original repo is **by step** (the learning rate decreases a little after every optimizer step); here I decay **by epoch** (the learning rate changes only at the end of each epoch).
* The validation image for the LLFF dataset is chosen as the most centered image here, whereas the original repo holds out every 8th image.
* The rendering spiral path is slightly different from the original repo (I use approximate values to simplify the code).
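A minimal sketch of the step-wise vs. epoch-wise decay described above; the base learning rate, decay factors, and `steps_per_epoch` below are illustrative assumptions, not this repo's defaults:

```python
def lr_by_step(base_lr, gamma, step):
    """Original repo: the lr decays a little after every optimizer step."""
    return base_lr * gamma ** step

def lr_by_epoch(base_lr, gamma_epoch, step, steps_per_epoch):
    """This repo: the lr is constant within an epoch and drops at epoch end."""
    epoch = step // steps_per_epoch
    return base_lr * gamma_epoch ** epoch

base_lr = 5e-4          # illustrative value
steps_per_epoch = 1000  # illustrative value

# Within the first epoch the epoch-wise schedule keeps the lr unchanged,
# while the step-wise schedule has already decayed it many times.
print(lr_by_epoch(base_lr, 0.5, 999, steps_per_epoch))   # 0.0005
print(lr_by_epoch(base_lr, 0.5, 1000, steps_per_epoch))  # 0.00025
```

Both schedules reach a similar learning rate over a full run; the epoch-wise variant just applies the decay in fewer, larger drops.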
* Network structure ([nerf.py](models/nerf.py)):
  * My base MLP uses 8 layers of 256 units, as in the original NeRF, while NeRF-W uses **512** units each.
  * My static head uses 1 layer, as in the original NeRF, while NeRF-W uses **4** layers.
  * I use **softplus** activation for sigma (reason explained [here](https://github.com/bmild/nerf/issues/29#issuecomment-765335765)), while NeRF-W uses **relu**.
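To illustrate the softplus-vs-relu point: for negative raw network outputs, relu yields exactly zero density with zero gradient, while softplus stays positive with a nonzero gradient, which helps avoid "dead" sigma values (the issue discussed in the link above). A dependency-free sketch:

```python
import math

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def relu(x):
    return max(x, 0.0)

# A negative raw output: relu kills the density entirely,
# softplus keeps it small but positive (and differentiable).
print(relu(-1.0))      # 0.0
print(softplus(-1.0))  # ~0.3133
```

For large positive inputs softplus approaches relu, so the change mainly affects the early-training regime where raw sigma outputs can be negative.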
* Training hyperparameters:
  * I find that a larger `beta_min` achieves better results, so my default `beta_min` is `0.1` instead of the paper's `0.03`.
  * I add 3 to `beta_loss` (equation 13) to keep it positive, which I found works well empirically.
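The `beta_loss` tweak can be sketched as follows. With `beta` close to `beta_min = 0.1`, `log(beta)` is about `-2.3`, so the raw mean-log term is negative; the constant `+3` only shifts the reported value (it has no gradient), keeping the logged loss positive. This is a simplified stand-in for the repo's actual loss code, not a copy of it:

```python
import math

def beta_loss(betas, offset=3.0):
    # Mean log-beta regularizer from equation (13), plus a constant
    # offset; the offset does not change the gradients w.r.t. beta.
    return offset + sum(math.log(b) for b in betas) / len(betas)

beta_min = 0.1
# Near beta_min, log(beta) ~ -2.3, so without the offset the term
# would be negative; with +3 it stays positive.
print(beta_loss([0.1, 0.2]))  # ~1.044
```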
* Evaluation:
  * The evaluation metric is computed on the **test** set only, while NeRF evaluates on val and test combined.