- cross-posted to:
- programmer_humor@programming.dev
Took me 2 hours to find out why the final output of a neural network was a bunch of NaNs. This is always very annoying, but I can't really complain; it makes sense. Just sucks.
I hope it was garlic NaN at least.
I guess you can always just add an
assert not data.isna().any()
in strategic locations.
That could be a nice way. Sadly it was in a C++ code base (using TensorFlow), so no such nice things (it would be slow, too). I skill-issued myself by thinking a struct would be zero-initialized, but
MyStruct input;
would not be, while
MyStruct input{};
will (that was the fix). Long story.
I too have forgotten to memset my structs in C++ TensorFlow after prototyping in Python.
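For anyone skimming by, a minimal sketch of the difference (MyStruct here is a made-up stand-in for the real input type; the memset route from the comment above is shown too):

#include <cstdio>
#include <cstring>

// Hypothetical stand-in for the real input type: a plain aggregate with no
// constructors, so default-initialization leaves its members indeterminate.
struct MyStruct {
    float bias;
    float scale;
};

int main() {
    MyStruct input;        // default-init: bias/scale hold garbage, and reading
    (void)input;           // them before assignment is undefined behavior

    MyStruct zeroed{};     // value-init: every member is zeroed -> the fix

    MyStruct cleared;      // the memset route also works, for trivial types
    std::memset(&cleared, 0, sizeof cleared);

    std::printf("zeroed  = {%f, %f}\n", zeroed.bias, zeroed.scale);
    std::printf("cleared = {%f, %f}\n", cleared.bias, cleared.scale);
}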
If you use the GNU libc, the
feenableexcept
function, which enables trapping on selected floating-point exceptions, can be useful to catch unexpected/unwanted NaNs at the moment they are produced.
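A minimal sketch of how that can look (assuming x86-64 Linux with glibc; the enabled exceptions surface as SIGFPE, so running under gdb breaks exactly where the first NaN is born):

#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // feenableexcept is a GNU extension, declared in <fenv.h>
#endif
#include <fenv.h>
#include <cstdio>

int main() {
    // Unmask the traps for invalid operations (0/0, inf - inf, ...),
    // division by zero, and overflow. The next such operation raises
    // SIGFPE instead of silently propagating a NaN.
    feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);

    volatile double zero = 0.0;
    double bad = zero / zero;   // 0/0 -> FE_INVALID -> SIGFPE fires here
    std::printf("%f\n", bad);   // never reached
}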
Fucking over-dramatic divisions by 0, sigh.
Thanks. This is great
“Bounds checking, motherf–ker! Do you speak it?”
this is just like in regular math too. not being a number is just so fun that nobody wants to go back to being a number once they get a taste of it