Serious answer: Posits seem cool, like they do most of what floats do, but better (in a given amount of space). I think supporting them in hardware would be awesome, but of course there’s a chicken and egg problem there with supporting them in programming languages.
Posits aside, that page had one of the best, clearest explanations of how floating point works that I’ve ever read. The authors of my college textbooks could have learned a thing or two about clarity from this writer.
I had the great honour of seeing John Gustafson give a presentation about unums shortly after he first proposed posits (type III unums). The benefits over floating point arithmetic seemed incredible, and they seemed much simpler overall.
I also got to chat with him about “Gustafson’s Law”, which kinda flips Amdahl’s Law on its head. Parallel computing has long been a bit of an interest of mine. I was also in my last year of computer science studies then, and we were covering similar subjects at the time. I found that timing to be especially amusing.
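For anyone unfamiliar, the gist of the contrast (my paraphrase, not his slides), with p the parallelisable fraction and n the number of processors:

```latex
% Amdahl: fixed problem size; speedup is capped no matter how big n gets
S_{\text{Amdahl}}(n) = \frac{1}{(1 - p) + p/n} \le \frac{1}{1 - p}

% Gustafson: the problem grows with the machine; speedup keeps scaling
S_{\text{Gustafson}}(n) = (1 - p) + p\,n
```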
No real use you say? How would they engineer boats without floats?
Just invert a sink.
Just build submarines, smh my head.
I know this is in jest, but if 0.1+0.2!=0.3 hasn’t caught you out at least once, then you haven’t even done any programming.
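For anyone who hasn’t been bitten yet, a minimal Rust demo:

```rust
fn main() {
    // 0.1 and 0.2 have no exact binary representation,
    // so their sum picks up a tiny rounding error.
    let sum = 0.1_f64 + 0.2_f64;
    println!("{sum:.17}"); // 0.30000000000000004
    assert_ne!(sum, 0.3);  // this assertion passes!
}
```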
IMO they should just remove the equality operator on floats.
Me making my first calculator in c.
what if i add more =
That should really be written as the gamma function, because factorial is only defined for members of Z. /s
Based and precision pilled.
As a programmer who grew up without a FPU (Archimedes/Acorn), I have never liked float. But I thought this war had been lost a long time ago. Floats are everywhere. I’ve not done graphics for a bit, but I never saw a graphics card that took any form of fixed point. All geometry you load in is in floats. The shaders all work in floats.
For a while, ARM MCU work was float-free, but loads of those chips have float support now.
I mean, you can tell good low-level programmers by how they feel about floats. But the battle does seem lost. There are lots of bits of technology that have taken turns I don’t like. Sometimes the market/bazaar has spoken and it’s wrong, but you still have to grudgingly go with it or everything is too difficult.
But if you throw an FPU in water, does it not sink?
It’s all lies.
> all work in floats

We even have `float16` / `float8` now for low-accuracy, high-throughput work. Even `float4`: you get +/- 0, 0.5, 1, 1.5, 2, 3, Inf, and two values for NaN.
Come to think of it, the idea of -NaN tickles me a bit. “It’s not a number, but it’s a negative not a number”.
I think you got that wrong: you get +Inf, -Inf and two NaNs, but they’re both just NaN. As you wrote, signed NaN makes no sense, though technically speaking they still have a sign bit.
Right, there’s no -NaN. There are two different values of NaN. Which is why I tried to separate that clause, but maybe it wasn’t clear enough.
IMO, floats model real observations.
And since there is no precision in nature, there shouldn’t be precision in floats either.
So their odd behavior is actually entirely justified. This is why I can accept them.
I just gave up fighting. There is no system that is going to be both fast and infinitely precise.
So long ago I worked at a game middleware company. One of the most common problems was skinning in local space vs. global space. We kept having customers try to have global skinning and massive worlds, then get upset by geometry distortion when miles away from the origin.
How do y’all solve that, out of curiosity?
I’m a hobbyist game dev and when I was playing with large map generation I ended up breaking the world into a hierarchy of map sections. Tiles in a chunk were locally mapped using floats within comfortable boundaries. But when addressing portions of the map, my global coordinates included the chunk coords as an extra pair.
So an object’s location in the 2D world map might be ((122, 45), (12.522, 66.992)), where the first elements are the map chunk location and the last two are the precise “offset” coordinates within that chunk.
It wasn’t the most elegant to work with, but I was still able to generate an essentially limitless map without floating point errors poking holes in my tiling.
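Roughly like this, if I sketch it in Rust (simplified; the names and the chunk size are made up):

```rust
const CHUNK_SIZE: f32 = 128.0; // hypothetical chunk width

/// A world position split into coarse integer chunk coords
/// plus a fine float offset inside that chunk.
#[derive(Debug, Clone, Copy)]
struct WorldPos {
    chunk: (i32, i32),
    offset: (f32, f32),
}

impl WorldPos {
    /// Re-home the offset so it stays small: whole chunk widths
    /// move into the integer part, keeping float error bounded.
    fn normalize(mut self) -> Self {
        let carry_x = (self.offset.0 / CHUNK_SIZE).floor() as i32;
        let carry_y = (self.offset.1 / CHUNK_SIZE).floor() as i32;
        self.chunk.0 += carry_x;
        self.chunk.1 += carry_y;
        self.offset.0 -= carry_x as f32 * CHUNK_SIZE;
        self.offset.1 -= carry_y as f32 * CHUNK_SIZE;
        self
    }
}
```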
I’ve always been curious how that gets done in real game dev though. if you don’t mind sharing, I’d love to learn!
That’s pretty neat. Game streaming isn’t that different. It basically loads the adjacent scene blocks ready for you to wander in that direction. Some load in LOD (Level of Detail) versions of the scene blocks so you can see into the distance. The further away, the lower the LOD, of course. Also, you shouldn’t really keep the same origin, or you will hit the distorted-geometry issue. Have the origin be the centre of the current block.
I’d have to double-check, but I think old handheld consoles like the Game Boy or the DS used fixed-point.
I’m pretty sure they do, but the key word there is “old”.
Floats make a lot of math way simpler, especially for audio, but then you run into the occasional NaN error.
On the PS3 Cell processor vector units, any NaN meant zero. Makes life easier if there are errors in the data.
Floats are only great if you deal with numbers that have no need for precision and accuracy. Want to calculate the F cost of an A* node? Floats are good enough.
But every time I need any kind of accuracy, I go straight for actual decimal numbers. Unless you are in extreme scenarios, you can afford the extra 64 to 256 bits in your memory.
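For illustration, a minimal sketch with the third-party `rust_decimal` crate (my library choice, assuming it’s available):

```rust
use rust_decimal_macros::dec;

fn main() {
    // Decimal arithmetic is exact for base-10 fractions,
    // so the classic float surprise goes away.
    let sum = dec!(0.1) + dec!(0.2);
    assert_eq!(sum, dec!(0.3));
    println!("{sum}"); // 0.3
}
```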
I have been thinking that maybe modern programming languages should move away from supporting IEEE 754 all within one data type.
Like, we’ve figured out that having a `null` value for everything always is a terrible idea. Instead, we’ve started encoding potential absence into our type system with `Option` or `Result` types, which also encourages dealing with such absence at the edges of our program, where it should be done.

Well, `NaN` is `null` all over again. Instead, we could make the division operator an associated function which returns a `Result<f64>` and disallow `f64` from ever being `NaN`.

My main concern is interop with the outside world. So, I guess, there would still need to be an IEEE 754 compliant data type. But we could call it `ieee_754_f64` to really get on the nerves of anyone wanting to use it when it’s not strictly necessary.

Well, and my secondary concern, which is that AI models would still want to just calculate with tons of floats, without error-handling at every intermediate step, even if it sometimes means that the end result is a shitty vector of `NaN`s - that would be supported with that, too.
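A rough sketch of what I mean (all names made up for illustration):

```rust
/// A float guaranteed non-NaN by construction.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Real(f64); // hypothetical type name

#[derive(Debug)]
struct NotANumber;

impl Real {
    fn new(x: f64) -> Result<Real, NotANumber> {
        if x.is_nan() { Err(NotANumber) } else { Ok(Real(x)) }
    }

    /// Division can introduce NaN (e.g. 0.0 / 0.0), so it is an
    /// associated function returning a Result, not an operator.
    fn div(self, rhs: Real) -> Result<Real, NotANumber> {
        Real::new(self.0 / rhs.0)
    }
}
```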
I agree with moving away from `float`s, but I have a far simpler proposal… just use a struct of two integers - a value and an offset. If you want to make it an IEEE standard where the offset is a four-bit signed value and the value is just a 28- or 60-bit regular old integer, then sure - but I can count the number of times I used floats on one hand, and I can count the number of times I wouldn’t have been better off just using two integers on -0 hands.

Floats specifically solve the issue of how to store an absurdly large range of values in an extremely modest amount of space - that’s not a problem we need to generalize a solution for. In most cases, having values up to the millions in magnitude with three decimals of precision is good enough. Generally speaking, when you do float arithmetic your numbers will be within an order of magnitude or two of each other… most people aren’t adding the length of the universe in seconds to the width of an atom in meters… and if they are, floats don’t work anyways.

I think the concept of having a fractionally defined value with a magnitude offset was just deeply flawed from the get-go - we need some way to deal with decimal values on computers, but expressing those values as fractions is needlessly imprecise.
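Sketching that out (field names and the base-10 interpretation are my own assumptions):

```rust
/// A number stored as `value` times 10^offset,
/// e.g. 12.345 == Fixed { value: 12345, offset: -3 }.
#[derive(Debug, Clone, Copy)]
struct Fixed {
    value: i64,
    offset: i8,
}

impl Fixed {
    /// Add after aligning both operands to the smaller offset.
    /// (Overflow handling omitted; this is just a sketch.)
    fn add(self, other: Fixed) -> Fixed {
        let offset = self.offset.min(other.offset);
        let scale = |f: Fixed| f.value * 10_i64.pow((f.offset - offset) as u32);
        Fixed { value: scale(self) + scale(other), offset }
    }
}
```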
While I get your proposal, I’d think this would make dealing with floats hell. Do you really want to `.unwrap()` every time you deal with it? Surely not.

One thing that would be great is if the `/` operator could work between `Result` and `f64`, as well as between `Result` and `Result`. It would be like doing a `.map(|left| left / right)` operation.
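Rust’s orphan rule won’t let you implement `/` directly on `Result`, but a thin wrapper gets close (a sketch, names mine):

```rust
use std::ops::Div;

/// Hypothetical wrapper so `/` can carry an error state along.
#[derive(Debug)]
struct Checked(Result<f64, String>);

impl Div<f64> for Checked {
    type Output = Checked;
    fn div(self, rhs: f64) -> Checked {
        // The `.map(|left| left / right)` idea, plus a NaN check.
        Checked(self.0.and_then(|left| {
            let q = left / rhs;
            if q.is_nan() { Err("NaN".into()) } else { Ok(q) }
        }))
    }
}
```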
Well, not every time. Only if I do a division or get an `ieee_754_f64` from the outside world. That doesn’t happen terribly often in the applications I’ve worked on.

And if it does go wrong, I do want it to explode right then and there. The worst case would be if it writes random `NaN`s into some database and no one knows where they came from.

As for your suggestion with the slash accepting `Result`s, yeah, that could resolve some pain, but I’ve rarely seen multiple divisions being necessary back-to-back, and I don’t want people passing around a `Result<f64>` in the codebase. Then you can’t see where it went wrong anymore either.
So, personally, I wouldn’t put that division operator into the stdlib, but having it available as a library, if someone needs it, would be cool, yeah.
NaN isn’t like null at all. It doesn’t mean there isn’t anything; it means the result of the operation is not a number that can be represented.

The only option is to make operations that would result in NaN into errors. Which doesn’t seem like a great solution.
Well, that is what I meant. That `NaN` is effectively an error state. It’s only like `null` in that any float can be in this error state, because you can’t rule out this error state via the type system.

Why do you feel like it’s not a great solution to make `NaN` an explicit error?

There’s plenty of cases where I would like to do some large calculation that can potentially give a NaN at many intermediate steps. I prefer to check for the NaN at the end of the calculation, rather than have a bunch of checks in every intermediate step.
How I handle the failed calculation is rarely dependent on which intermediate step gave a NaN.
This feels like people want to take away a tool that makes development in the engineering world a whole lot easier because “null bad”, or because they can’t see the use of multiplying 1e27 with 1e-30.
Well, I’m not saying that I want to take tools away. I’m explicitly saying that an `ieee_754_f64` type could exist. I just want it to be named annoyingly, so anyone who doesn’t know why they should use it will avoid it.

If you chain a whole bunch of calculations where you don’t care about `NaN`, that’s also perfectly unproblematic. I just think it would be helpful to:

- Nudge people towards doing a `NaN` check after such a chain of calculations, because it can be a real pain if you don’t do it.
- Document in the type system that this check has already taken place. If you know that a float can’t be `NaN`, then you have guarantees that, for example, addition will never produce a `NaN`. It allows you to remove some of the defensive checks you might have felt the need to perform on parameters.

Special cases are allowed to exist and shouldn’t be made noticeably more annoying. I just want it to not be the default, because it’s more dangerous, and in the average application lots of floats are just passed through, so it would make sense to block `NaN`s right away.

What do you do about a dataset which contains 11999 fine numbers, but one of them is NaN because George called in sick that week? Throw away the whole dataset because it doesn’t fit the data type?
idk if you ever had to actually work with floats,
but in statistics, you deal with NaNs all the time. Data is absent from the data set. If it would be an error every time, you wouldn’t get anything done.
It doesn’t have to “error” if the result case is offered and handled.
Float processing is at the hardware level. It needs a way to signal when an unrepresentable value would be returned.
My thinking is that a call to the safe division method would check, after the division, whether the result is a `NaN`. And if it is, then it returns an error value, which you can handle.

Obviously, you could do the same with a `NaN` by just throwing an if-else after any division statement, but I would like the type system to enforce that this check is done.

I feel like that’s adding overhead to every operation to catch the few operations that could result in a NaN.
But I guess you could provide alternative safe versions of float operations to account for this. Which may be what you meant thinking about it lol
I would want the safe version to be the default, but yeah, both should exist. 🙃
Float is bloat!
While we’re at it, what the hell is -0 and how does it differ from 0?
It’s the negative version
So it’s just like 0 but with an evil goatee?
Look at the graph of y=tan(x)+π/2
-0 and +0 are completely different.
For integers it really doesn’t exist. An algorithm for multiplying an integer by -1 is: invert all bits and add 1 (two’s complement). You can do that for 0 of course; it won’t hurt.
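A quick sketch showing why no -0 falls out of that algorithm:

```rust
fn negate(x: i32) -> i32 {
    // Two's complement negation: invert all bits, add 1.
    (!x).wrapping_add(1)
}

fn main() {
    assert_eq!(negate(5), -5);
    assert_eq!(negate(0), 0); // !0 is all ones; adding 1 wraps back to 0
}
```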
Call me when you’ve found a way to encode transcendental numbers.
Perhaps you can encode them as a computation (i.e. a function of arbitrary precision).
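Something like this, maybe (a toy sketch; the slowly converging Leibniz series is just the simplest example):

```rust
/// A "computable" pi: ask for more terms, get a better approximation.
fn pi(terms: u64) -> f64 {
    // Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    4.0 * (0..terms)
        .map(|k| {
            let sign = if k % 2 == 0 { 1.0 } else { -1.0 };
            sign / (2 * k + 1) as f64
        })
        .sum::<f64>()
}

fn main() {
    println!("{}", pi(1_000_000)); // ≈ 3.141592 (converges very slowly)
}
```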
Hard to do, as those functions are often limits and need infinitely many function applications. I’m telling you, math.PI is a finite lie!
Do we even have a good way of encoding them in real life without computers?
Just think about them real hard
\pi
Here you go
π
May I propose a dedicated circuit (analog because you can only ever approximate their value) that stores and returns transcendental/irrational numbers exclusively? We can just assume they’re going to be whatever value we need whenever we need them.
Wouldn’t noise in the circuit mean it’d only be reliable to certain level of precision, anyway?
I mean, every irrational number used in computation is reliable to a certain level of precision. Just because the current (heh) methods aren’t precise enough doesn’t mean they’ll never be.
You can always increase the precision of a computation; analog signals are limited by quantum physics.
From time to time I see this pattern in memes, but what is the original meme / situation?
It’s my favourite format. I think the original was ‘stop doing math’
Thank you 😁
Math is numbers and therefore non-physical, and therefore esoteric, so stop giving it credit.
/s
Off topic, but how does one get a profile pic on Lemmy? Also, love you ken.
You can configure it in the web interface. Just go to your profile.
Thank you!
Go to “Settings” (cog wheel) and then “Avatar”.
deleted by creator
That doesn’t really answer the question, which is about the origins of the meme template.
Yikes, I placed this in the wrong spot. Thank you.
There are probably a lot of scientific applications (e.g. statistics, audio, 3D graphics) where exponential notation is the norm and there’s an understanding about precision and significant digits/bits. It’s a space where fixed-point would absolutely destroy performance, because you’d need as many bits as required to store your largest terms. Yes, NaN and negative zero are utter disasters in the corners of the IEEE spec, but so is trying to do math with 256-bit integers.
For a practical illustration of how stark a difference this is: the PlayStation (one) used an integer z-buffer (“fixed point”). This is responsible for the vertex popping/warping that the platform is known for. Floating-point z-buffers became the norm almost immediately after the console’s launch, and we’ve used them ever since.
While it’s true the PS1 couldn’t do floating point math, it did NOT have a z-buffer at all.
What’s the problem with -0?
It conceptually makes sense for negative values close to 0 to be represented as -0. In practice, I have never seen a problem with -0.

On NaN: While its use cases can nowadays be replaced with language constructs like result types, it was created before exceptions or sum types. The way it propagates kind of mirrors Haskell’s monadic `Maybe`.
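The analogy in code, using Rust’s `Option` in place of Haskell’s `Maybe` to stay with the thread’s examples:

```rust
fn main() {
    // NaN flows through every float operation...
    let x = f64::NAN;
    assert!((x + 1.0).is_nan());
    assert!((x * 0.0).is_nan());

    // ...much like None flows through an Option chain.
    let y: Option<f64> = None;
    assert_eq!(y.map(|v| v + 1.0).map(|v| v * 0.0), None);
}
```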
We should be demanding more and better wrapper types from our language/standard library designers.
I’m like, is that code on the right what I think it is? And it is! I’m so happy now.
Precision pilled.
The meme is right for once