
Why asynchronous Rust doesn't work #10

Open
utterances-bot opened this issue Mar 10, 2021 · 48 comments

@utterances-bot

Why asynchronous Rust doesn't work

In 2017, I said that “asynchronous Rust programming is a disaster and a mess”. In 2021, a lot more of the Rust ecosystem has become asynchronous – such that it...

https://theta.eu.org/2021/03/08/async-rust-2.html

HadrienG2 commented Mar 10, 2021

As you point out, a lot of the ergonomics issues of closures in Rust appear when they are long-lived (stored in structs, sent to other threads, etc), because now you have to think hard about all the lifetime calculus that the compiler would elide or otherwise handle for you in simple callback cases.

For multi-threaded work, this means that closures can be made a lot more ergonomic by abiding by a structured concurrency paradigm where no concurrent task is allowed to escape the current scope. See scoped threads in rayon and crossbeam if you are not familiar with the concept. The much-loved rayon parallel iterators are actually just a special case of it.
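
For anyone who hasn't seen it, here's a minimal sketch of the idea using crossbeam's scoped threads (the data and the sum are made up; newer Rust has std::thread::scope with the same shape):

```rust
fn main() {
    let data = vec![1, 2, 3];

    // The scope guarantees every spawned thread is joined before it returns,
    // so the spawned closure may simply borrow `data` from the enclosing stack frame.
    crossbeam::thread::scope(|s| {
        s.spawn(|_| {
            println!("sum = {}", data.iter().sum::<i32>());
        });
    })
    .unwrap(); // panics from the scoped threads are propagated here
}
```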

I wonder what currently prevents the async ecosystem from producing a good 99% solution that is as ergonomic as scoped threads are for threads. I suspect the need for long-lived framework-ish infrastructures (event loops, IO backends, etc) might be it, but am not sure.

bocc commented Mar 10, 2021

I am also troubled by the fact that two closures with identical signatures are not considered identical by the type checker, so you can't, e.g., return closures from different match arms. (Although I'm sure there is a reason for this which I am unaware of.)

koraa commented Mar 10, 2021

I pretty much agree with all your points: async programming as it is today in rust is a mess and the lack of good support for closures is a sore point. Lots of the paradigms in rust are borrowed from functional languages, and being able to use a more functional style is one of the biggest advantages of rust for me… it is a sorry state of affairs that we are not able to use a functional style when coding in rust.

I would like to add the following: I've always looked at Rust as sort of the breaking 2.0 version of C++, twenty years later, and as such it benefits immensely from the mistakes that were made in C++. Templates/generics, RAII, move semantics, zero-cost abstractions, smart pointers were all first tried in C++, sometimes with questionable results. Rust's answers to the problems (use Haskell-style type classes, use RAII with a normal function as constructor, move by default, just use zero-cost abstractions) are all the direct result of lessons learned in C++.

I think these mistakes had to be made at some point to make the innovations possible; doing async properly, as a zero-cost abstraction, in the absence of garbage collection, in the absence of laziness, is something no other language has truly attempted. Like JavaScript, we're going for syntax sugar without the typing depth a proper monadic system like Haskell's provides. With the added difficulty of doing memory management explicitly…

This is a mistake, but since neither you nor I can provide the experience of what actually works in this regard, it's a mistake that needs to be made for a Rust 3.0 to do it properly.

@bocc The reason for that is that if a closure captures different variables than another closure, its size changes. E.g., if a closure captures two variables x: i32 and y: i16, its size is going to be different from that of a closure that captures f: std::fs::File. You'd have to use a dyn Trait (usually boxed) in this case.
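
To make that concrete, here's a small sketch (the captured values are arbitrary): two closures with the same call signature but different sizes, because each captures different state.

```rust
use std::mem::size_of_val;

fn main() {
    let (x, y): (i32, i16) = (1, 2);
    let buf = vec![0u8; 16];

    // Both are Fn() -> usize as far as traits go, but each has its own
    // anonymous type whose size is determined by what it captures.
    let small = move || (x as usize) + (y as usize);
    let large = move || buf.len();

    println!("{} bytes vs {} bytes", size_of_val(&small), size_of_val(&large));
}
```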

@HadrienG2

I am also troubled by the fact that two closures with identical signatures are not considered identical by the type checker, so you can't, e.g., return closures from different match arms. (Although I'm sure there is a reason for this which I am unaware of.)

Something like this has actually been proposed in the early days of futures and impl Trait, where people wanted to be able to return two different kinds of concrete types from a function returning impl Trait. But the Rust designers were uneasy about it at the time because it could not be made zero-cost, which means that making it too transparent would create a performance footgun.

To see why, remember that the difference between a function and a closure is that a closure can store state (references to variables in the surrounding scope, or moved/copied versions thereof). At the implementation level, it is like a struct holding that captured state. Every such struct is potentially unique because every closure captures potentially different state, so to store a closure, you need a variable whose type is specific to that closure.

If we wanted to store N different kinds of closures in a single variable, then we would need a type which can store N different kinds of concrete structs, i.e. an enum or a (boxed) trait object. Both of these abstractions have nontrivial overhead. Enums require discriminant checking and have a space overhead that's the maximum of the N stored types, whereas trait objects require memory allocation (in current Rust, there is work to relax this in selected cases in the future) and vtable indirection.
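
Concretely, the two workarounds look something like this sketch (the Either type and the function names are invented for illustration):

```rust
// Workaround 1: a boxed trait object, i.e. one heap allocation plus a
// vtable-indirected call.
fn pick_boxed(flag: bool) -> Box<dyn Fn(i32) -> i32> {
    if flag {
        Box::new(|x| x + 1)
    } else {
        Box::new(|x| x * 2)
    }
}

// Workaround 2: an enum whose size is the larger of the two closures plus a
// discriminant, and whose call site pays a branch instead of an indirect call.
enum Either<A, B> {
    Left(A),
    Right(B),
}

impl<A: Fn(i32) -> i32, B: Fn(i32) -> i32> Either<A, B> {
    fn call(&self, x: i32) -> i32 {
        match self {
            Either::Left(f) => f(x),
            Either::Right(g) => g(x),
        }
    }
}

fn main() {
    let flag = true;
    println!("{}", pick_boxed(flag)(20));

    let chosen = if flag {
        Either::Left(|x: i32| x + 1)
    } else {
        Either::Right(|x: i32| x * 2)
    };
    println!("{}", chosen.call(20));
}
```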

I personally think that given sufficiently loud syntax, this would still be accepted and become a nice language feature. But as usual, someone needs to put in the work of designing that loud syntax and resolving all the unforeseen edge cases that don't come up when one is just toying with a concept.

@HadrienG2

HadrienG2 commented Mar 10, 2021

@koraa While I very much agree with your general sentiment, I would just like to bounce back on a specific point...

Like javascript, we're going for syntax sugar without the typing depth a proper monadic system like in haskell provides.

Several prominent Rust designers have stated in the past that a monadic system designed like that of Haskell wouldn't work in Rust in their opinion. The pain points usually mentioned are...

  • Without further constraints, higher-kinded types like Monad make type inference undecidable. The constraints used by Haskell, based on currying, would feel out of place in Rust, where currying is not commonly used.
  • There are difficult interactions with the ownership, borrowing and lifetime system, common pain points being handling multiple kinds of references and (in the specific case of async and generators) correctly handling borrows across suspend and resume points without introducing avenues for memory unsafety.
  • A monadic design makes it difficult to handle complex control flow, such as early returns or general loops. This is not usually a problem for functional programming languages, where such control flow is rare as it is almost always done for the purpose of mutating data or selectively avoiding side effects. But since Rust is a lot friendlier towards mutation, it feels more limiting there.
  • There are a lot of API problems that Haskell solves with their local equivalent of a tree of boxed trait objects, rather than trying to produce maximally efficient state machine types, and this would feel like an uncomfortable amount of overhead in a performance-focused language like Rust.

Two particularly interesting recaps to read are:

To be clear, I don't think this invalidates your overall point that there might be benefits to a more general-purpose abstraction here, but I think there's a reasonable amount of evidence that the designs that work for Haskell cannot be applied as-is to an imperative language like Rust or Javascript, especially when performance is a concern. The "clean" solution to async/await like problems in those languages may eventually turn out to be something completely different like some sort of generalized coroutine abstraction.

DBJDBJ commented Mar 10, 2021

"... async has destroyed rust as a viable language for embedded, because trying to hide alot of the design has led to a fat runtime..."

That has precisely "destroyed" Rust as a viable language for systems programming too (hint: OS).

kazimuth commented Mar 10, 2021

edit: on further reflection this article is pointless Hacker News bait and I don't feel like participating here. Hope everybody enjoys the ongoing plagues and deadly climate events.

jgarvin commented Mar 10, 2021

@aep I'm confused by the claim that Rust has a fat runtime, because Rust has no runtime. You can use async on embedded just fine? This enabled async for no_std.

@aep

aep commented Mar 10, 2021

sorry, no energy for that, good luck with your embedded endeavors.

The article mentions one of the problems caused by rust's general design, but I think one thing that everybody seems to gloss over is the high-level-picture problem. Programming languages exist as a tool to allow a human to convey a logical progression of steps to a microprocessor that can run them in the best (easiest, fastest, etc., what have you) way possible. If that were not true, we'd all be writing hex codes in native machine language for our favorite processor. The language is supposed to be a tool for the human to make their job easier.

And in my opinion, this is where rust falls down. It tries to solve hard problems like dangling pointers, but in trade has made simple things so hard that it seems, to me at least, counterproductive. It would make more sense to write a much simpler language, and write tons of static analyzers to try and find places where you've gone wrong. Enforcing rules that you wouldn't normally need to follow, just in case breaking them might leave you with a dangling pointer, is counterproductive. async is just another example of that.

MrTact commented Mar 10, 2021

It would make more sense to write a much simpler language, and write tons of static analyzers to try and find places where you've gone wrong.

You are literally describing how the Rust compiler works.

By enforcing rules that you wouldn't normally need to follow just in case it might cause you to break the dangling pointer rule, is counter productive.

Huh? If you have a pointer that you might not own, then you have a pointer you might not own! If the compiler allows you to manipulate it, then you are opening the door to derefing a null ptr at runtime. There's no way around this! (Or perhaps it's better to say, no one has yet come up with a better zero-cost solution than ownership.)

@nixomose

Huh? If you have a pointer that you might not own, then you have a pointer you might not own! If the compiler allows you to manipulate it, then you are opening the door to derefing a null ptr at runtime. There's no way around this! (Or perhaps it's better to say, no one has yet come up with a better zero-cost solution than ownership.)

I didn't explain myself well. The point I was trying to get across was that the rules exist to solve a problem, but sometimes the rules force you to do things that you wouldn't need to do to still have safe code; in order to make it possible for the compiler to KNOW it's safe, it has to enforce these rules.
That's not a great explanation either, lemme try again: you can write a perfectly safe function that would not pass rust's rules, because the rules exist to enforce a particular thing, and they drag other things down with them.

The problem is that this creates a more restrictive environment than needs to exist to avoid those problems, because it also keeps you from doing perfectly valid and safe things, just so the compiler can be sure they're safe.
The ability to write code that does something useful, that maybe the compiler can't prove is safe but actually is, is part of why humans are writing software and computers are not.

@HadrienG2

HadrienG2 commented Mar 11, 2021

@nixomose The static analysis false positive problem that you describe is the reason why Rust has unsafe features.

I think (but correct me if I'm wrong) that your point of disagreement with the language design here is how often unsafe blocks should be needed. Rust is designed under the assumption that after a period of cognitive training, you should reach the point where unsafe features are rarely needed in your everyday programming and you can do most of your work with the safe subset.

If you do reach this point, then the current Rust design is beneficial, because it greatly limits the codebase portion that you need to audit by hand in order to ensure safety properties, which means that when working on your code, you can usually free your mind from such concerns and focus on more important things.

If, on the other hand, you do not reach this point where unsafe is rarely needed, then the current Rust design will be frustrating, because it does not try hard to make unsafe ergonomic. To the contrary, it tries to make it a bit clumsy so that you avoid using it when not absolutely necessary, to help in the aforementioned training process.

For many people, it seems the tradeoff becomes beneficial after a couple of weeks/months of Rust use, but I can totally imagine that there are also other people for whom it won't ever work out.

@DBJDBJ

DBJDBJ commented Mar 11, 2021

"...Rust is designed under the assumption that after a period of cognitive training, you should reach the point ..."

  1. Let us imagine there is a set of most useful programming languages.
  2. Rust is not the language that most people will understand from that set.

The same applies to standard C++, Haskell, Lisp, etc ... that is a second set.

The third set is the first set minus the second set.

Feasible software development uses the languages from the third set, whatever we (here) might think of them. Hint: C, Zig, Swift, Go. And yes, there are "things" like C#, Java, PHP, JavaScript, et al.

Increase the distance until you see the full picture.

@nixomose

I think (but correct me if I'm wrong) that your point of disagreement with the language design here is how often unsafe blocks should be needed.

yeah, that's not my point. I actually have rarely ever used any unsafe anywhere.

my point is closer to what you said later

For many people, it seems the tradeoff becomes beneficial after a couple of weeks/months of Rust use, but I can totally imagine that there are also other people for whom it won't ever work out.

The amount of time rust makes things harder than they need to be (so it can ensure safety), the amount of time it spends getting in the way of my trying to accomplish a task, and all the extra complexity involved in doing simple things make it, to me, not worth the tradeoff for what little safety you get, relative to all the things that can go wrong.

A programming language is a tool, it should help me accomplish my task, not get in the way. If a hammer hit me back every time I hit a nail with it, I wouldn't use hammers. I don't use rust, I use java. It gets out of my way and there's better tooling for it than anything else I've seen. And where java doesn't cut it, I use C or C++.

Since we're on the topic, the other big fallacy I see is all the safety you supposedly get. I've been programming computers for 40 years. I learned about memory management when I was programming in assembly and C. Maybe rust is for people who started with high level languages and want to work their way down and never gained strong skills in understanding exactly what's going on at every stage of your program in regards to memory management. But when you know all that, rust gets in your way. And all the memory safety in the world doesn't solve most of the problems you can run into when programming. I still have to worry about logic problems, race conditions, and handling unexpected errors. Rust just makes all that harder because it keeps getting in my way, so it can save me from a memory leak. Not worth it.

I've been doing rust for about a year. I've done lots of async, lots of async moving into closures, lots of not calling functions because rust makes it so unnecessarily hard... Maybe things get better, but not from what I've seen.

Since we're on the topic, the other big fallacy I see is all the safety you supposedly get.
@nixomose This fallacy is more of a misunderstanding.

Let's take building as an example. Houses were made for thousands of years without complaint, because 99% of the time they worked fine and when they got old enough to be a danger, it was obvious. 0.99% of the time, they would fail in "safe" ways where a small piece of the building moved a few inches, or a wall fell down, or the ceiling sagged in a really obviously unsafe way.

But 0.01% of the time, they fell down. They killed people. Lots of people, entire families at a time. Survival rate was decent for one-story buildings, but as we built higher, survival rate decreased. And these 0.01% weren't designed by bad architects; some of the largest architectural disasters happened with structures with experienced and talented architects.[1]

Building codes and building inspections are the response to that. Every structure has to be built in a certain way, with some restrictions, and has to be stronger than its intended usage. The driving force behind the change was to eliminate the 0.01% case. The rules apply regardless of how experienced the architect or engineer is.

A side effect of the building codes has been overall increased building lifetime. Buildings can stand for hundreds of years without issue, which really wasn't a thing not too long ago.


99% of computer programs work fine, and when they get old enough to be a danger, they're rewritten. 0.99% of the time, programs fail in annoying or humorous ways, that affect a few people and only some of the time. The damage is small and there's plenty of time to clean up any mess and fix the issue.

But 0.01% of the time, they fail catastrophically, in a critical position in a critical system. Perhaps it was handling encryption of personal information like SSN, password, or credit card numbers. Maybe it was guarding a multinational company's computer network. Maybe it was underpinning a medical service, or some system that is depended on by thousands of people for their safety. Maybe it was the first line of defense against an attacker. As time goes on, this happens more often, and more people are affected per failure. And these 0.01% weren't designed by bad programmers; these include programs designed by teams of highly experienced developers.[2]

The borrow checker, the strict rules around ownership, and the guarantees your program is expected to uphold are the building codes. Every rust program has to be built in a certain way, with some restrictions, and it has to be stronger than its intended usage. The driving force behind the change is to eliminate the 0.01% case. They apply regardless of how experienced the designer or programmer is.


The restrictions are not arbitrary. Take a look at this list of the top 25 security vulnerability causes: https://cwe.mitre.org/top25/archive/2021/2021_cwe_top25.html Of the 25 entries on that list, 7[3] of them are eliminated entirely by just using Rust instead of another language; that's without considering Rust's enforcement of API constraints through its type system that so many Rust libraries implement.

Real programs written by experienced computer programmers and reviewed by dozens of people can have these problems. And the software that commonly performs these critical functions is performance-sensitive, so garbage collected languages and scripting languages are not options (though java has made substantial progress anyway).

[1] https://en.wikipedia.org/wiki/Tay_Bridge_disaster
[2] https://heartbleed.com/
[3] 1, 3, 7, 8, 12, 15, 17

@nixomose

I take your point from your analogy, although some bits don't quite make sense to me exactly.

I make a distinction between a software failure that exposes some personal information (which can be insured against) and a building collapsing and killing people (which can't be protected against). Yes, there are software failures that cause medical equipment to kill people, and it is possible rust would have avoided some of those. But I seem to recall one that was caused by a race condition: some machine irradiated a patient because of the unique way the operator used the controls. Rust won't fix that.
It is also possible that the use of rust (lack of entrenchment, lack of a standard, big learning curve, and its own problems making it impossible to debug rust-lang/rust#72196 ) could cause some medical equipment product never to make it to market in the first place, thus being unable to save lives. You could argue that both ways.

My point about rust is that it doesn't solve the problems, it just moves them. Sure, you can't actually misuse memory, but you can still make a zillion other mistakes. And in my opinion it just doesn't seem worth it; the amount of extra hindrance and loss of flexibility you have to deal with doesn't make sense for the little gain you get, since you really are just moving the problem somewhere else.

So rust can remain a niche solve-the-C/C++-problem language until something else comes along that does some other kind of memory safety with less pain to the programmer. It's the current new and shiny thing, and people will get over it like everything else. My opinion only; your mileage may vary.

I still believe that the computer is a tool for the human and not the other way around, and although rust has taken the argument of safety to the extreme, I have to think there are other ways to solve the problem that aren't so onerous and restrictive. There are still plenty of ways to design buildings; I seem to recall there's one made out of Legos that is structurally sound.

And I guess the biggest problem that you point out is that building codes are required by everybody in the country. rust is not required by everybody, so anybody can just skirt the rules by using another language anyway.

I think we need to make a different kind of progress. Making the same languages over and over with various levels of type safety, memory safety, threading safety and standard library, and package manager and ide and ide plugins is really not helping anybody either.

The paradigm hasn't been invented yet. I don't know what it will be, but I don't think we'll be typing code into ever fancier editors 100 years from now. Yet all we do is keep reinventing the same languages over and over; we keep writing the same string handling library for these new languages over and over.

Maybe instead of writing another rust, we should start moving towards that next thing. I haven't figured out what it is yet, but as soon as I do or somebody does, I think I'll start moving towards that.

@bocc

bocc commented Nov 14, 2021

Some machine irradiated a patient because of the unique way the operator used the controls causing this problem. Rust won't fix that.

Are you sure about that? Rust is currently the only mainstream language with destructive move semantics. That lets you use typestates, meaning the compiler verifies your state transitions and the operations valid in a given state, fixing the exact problem that system had.
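
As a hedged sketch of the pattern (the Machine and shield names are invented; this is an illustration, not real interlock code): the state lives in the type and transitions consume the old value, so firing with the shield open simply does not compile.

```rust
use std::marker::PhantomData;

// Invented states for illustration.
struct ShieldOpen;
struct ShieldClosed;

struct Machine<State> {
    _state: PhantomData<State>,
}

impl Machine<ShieldOpen> {
    fn new() -> Self {
        Machine { _state: PhantomData }
    }
    // Consuming `self` is the (destructive) move: the open-state value
    // cannot be used again after this call.
    fn close_shield(self) -> Machine<ShieldClosed> {
        Machine { _state: PhantomData }
    }
}

impl Machine<ShieldClosed> {
    fn fire_beam(&self) {
        println!("firing with the shield closed");
    }
}

fn main() {
    let m: Machine<ShieldOpen> = Machine::new();
    // m.fire_beam();          // does not compile: no such method in this state
    let m = m.close_shield();  // the old `m` is moved away here
    m.fire_beam();
}
```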

@nixomose

I think this is what I was referring to:
https://en.wikipedia.org/wiki/Therac-25

"Because of concurrent programming errors (also known as race conditions), it sometimes gave its patients radiation doses that were hundreds of times greater than normal,"

https://news.ycombinator.com/item?id=17337658

"Rust doesn't magically fix resources contention. It just provides safe concurrency primitives (and the surrounding language semantics) that make it difficult -to-impossible to misuse in such a way as to cause a deadlock or race condition."

I can't speak to the details of either. I've forgotten most of the rust I knew, and I don't know the details of the therac-25 problem.
All I'm saying is that rust enforces certain constraints that remove the possibility for certain types of problems. It is my opinion that the problems just get moved elsewhere, and for all the effort you have to put in to get those safety guarantees, it's not worth it. I'm sure someday a rust program will kill somebody. I already know at least one person who quit because of it. We're well on our way. :-)

@Phlosioneer

In that particular case Rust would have fixed it. The programmers were using a variable that they did not expect to be mutated for the duration of the operation, and yet the UI was able to mutate it, because there was no lock on it. That's the core premise behind rust's mutability and borrowing system - preventing that situation.

Different programming languages are good at different things, like tools. Rust is a tool in the toolbox. Rust has a combination of properties that make it a very good tool to have, but that doesn't mean you should always use it!

People who make games in rust are either doing it for the challenge, to learn, or for the fun of it. The idea of using rust for build scripts is horrifying to me.

...since you really are just moving the problem somewhere else.

IMHO Rust does just move the problems - but the moved problems are easier to spot than the originals. Looking at C++ code to find memory errors is incredibly difficult even when you know what you're looking for. Reviewing C++ code is consequently very difficult. Suppose that complexity was "moved" into a bunch of Rc, Box, and Arc structures to appease the borrow checker. Reviewing that code is a lot easier - the possible problem areas are highlighted and named.

The reason why memory errors and out-of-bounds errors and pointer errors are at the top of the CWE list is because they're hard to spot. Rust makes those particular errors impossible, and that increases the number of other errors, yes. But they're easier to find when reading the code, so they get caught more often.

@nixomose

I don't know how far off topic you want to get on (or off) this subject and I expect come monday somebody's going to make a note that this is an issue thread not a ... oh it is a blog comment, okay, let's continue then...

a few things: "Suppose that complexity was "moved" into a bunch of Rc, Box, and Arc structures to appease the borrow checker."
I don't even have a problem with that part, but then you start taking a performance penalty for every little thing you do. So that's a tradeoff okay, but if I'm going to lose performance, I might as well use java. I know, java is worse, but there's something to do. I usually start my rust rants with "rust gets a lot of things right, but the few things that it gets wrong it gets wrong so badly that it makes it not worth using for me." so I'm with you on wrapping and manually unlocking things only when you need to change them, etc.

Here's where the off topic starts: " Looking at C++ code to find memory errors is incredibly difficult even when you know what you're looking for."
Looking at c++ code can be good or bad depending on what the author did. You can write c++ that looks like rust or go or basic. The problem with c++ is that they won't leave it alone and they keep adding more crap to it, so pretty much everybody agrees they should stop; there's no canonical set of c++ stuff to use, so you should pick one set and stick with it, and so on. That's a c++ problem specifically, but... if you do it right, and a whole program is written reasonably consistently, it can be not that bad to find problems. When you go out of your way to obfuscate your code (some people will just call this using modern features), yeah, it's going to be hard.
But here's what I really take issue with:
"Reviewing C++ code is consequently very difficult." I think code reviews are.... well, I won't say pointless, but how about done completely wrong. If you want to review code, you do it in a debugger. You watch it run, you see the variables change the way you expect, and you see the other variables not change as you don't expect. You'll never get that in a traditional code review. This isn't a rust or c++ thing, this is a programming peeve of mine. most people's code reviews are pointless because they're looking for style or lack of comments or gasp, indentation, when you really should be looking for logic flaws, or race conditions, etc. And in my opinion, you're never going to be as good at knowing what the computer is doing as the computer will be when it shows you what it's doing in the debugger.
you want to get rid of bugs before production, forget automated tools, forget rust, forget codings standards. Watch the computer run the software at human speed, and you will find most of the problems.

In fact most people can't be bothered, so code reviews are a joke.
Agreed, rust removes the possibility of making some types of mistakes, but having a function change a mutable variable it was passed, which happens to have a side effect later on, well, rust will never save you from that.

@interroobang

Hi there. Did you try evmap? https://docs.rs/evmap/10.0.2/evmap/

Hope this helps.

@nixomose I found that several of the things that you mentioned struck a chord with me. My background is mostly C++ (originally self taught, for the purpose of learning gamedev, before I went to college and learned it properly with a C and UNIX background) and JavaScript, although nowadays in particular, I spend a lot more time writing shell scripts and dockerfiles than either of those.

And I only have 15 years' worth of experience. I've been following Rust loosely since it came onto the scene, and never really taken the full plunge in terms of sinking enough effort into it to be able to decide if it is for me or not, but I have been noting the discussions around these hurdles which plague async in Rust, and they do give me quite some pause about whether diving into Rust is as much of a no-brainer now as it seemed when it first came out and was even more exciting to learn about.

What struck a chord with me in what you wrote is that the programming language typically seems to stop short of meaningfully helping out its stewards (that is to say, the original author as well as its maintainer, who may not be in contact with the former) in the day to day challenges which seem to boil down to grokking execution flow.

When I look at a program, usually it is with the intention of improving its behavior, and that is hardly a sensible thing to jump into without first having an understanding of its behavior.

Although Rust's done a pretty admirable thing by banishing certain classes of access violations, the cost borne of it is perhaps just as difficult to quantify as the cost of those access violations in the first place; of course, this is the "moving" of problems that's already been brought up. I couldn't possibly comment, as a non-Rust coder, on how materially easier it becomes and the resulting value which it brings to the table; we need to somehow weigh the cost of the added challenge of using this language against the amount of effort the reduction in memory corruption issues saves us.

In terms of where this fits, then, in understanding a program's behavior, it actually probably moves the needle very little; when we are trying to unravel a particularly convoluted execution flow, it matters not whether that was implemented in C++, Rust, Python, or JavaScript: a ball of mud is a ball of mud whether it's composed primarily of clay or silt.

You mention a different kind of progress and imagining what the next thing will look like. Because we clearly do desperately need a next thing. This is the second thing that you wrote which struck a chord with me, and is something I've been thinking about a lot lately.

It's a bit easy to get jaded with the stream of announcements of the latest attempt (number twenty seven thousand) to create another language or framework, when the fact of the matter is that a lot of the simple building blocks of computing are plenty sufficient to build a formidable edifice upon which you can accomplish all that you could ever dream of, if only those building blocks were robust and sufficiently well-behaved... as well as somehow capable of scaling when composed upon each other.

Just to be clear, these building blocks are very basic and familiar things... arrays, hash tables, threads...

Although the more experienced among us are keen to evaluate these properties of any given building block or collections thereof ("frameworks"), as it certainly is of great import to sort out the most useful/performant/flexible of such packages as our livelihoods depend on them in so many ways, what glaringly remains inescapable is this sort of portal to the abyss that unavoidably opens, as though it were a fourth law of thermodynamics... its impenetrability growing in some polynomial proportion to the number of possible states associated with it. This is a form of entropy, if you will.

From that follows the principle that the more complex you make something, the more subtle issues you embed into it and the longer the timescale is to discover them (let alone resolve them). And the perhaps related principle that the more complexity a tool is able to polish away and hide, the more difficult and time-consuming an issue becomes to address when an assumption of its design is subverted.

Indeed, it should be possible to apply a great deal of resources, human and/or machine, to progress toward designing a software system which is free of flaws and it should perhaps be possible to prove that it is the case for a given program and evolve a program with a real-world application into such a robust state. Quite attractive, and all fine and good theoretically, but this falls apart in the face of practical economics for now. So we shall revisit such ambitions in a few decades' time when it has matured more and become more practical.

All of this being a long-winded way to express my opinion which is that I think a lot of ongoing effort isn't quite being directed in a way that could create the most value. My thesis is that the current state of tooling with regard to the introspection of execution flow is what's sorely lacking in this space and this is where the most significant gap exists. Since this is the area I am working in, and I have little interest in language design, the answer is probably a tool or suite of tools which are, to whatever extent possible, language-agnostic.

There is a great need for tools to help us actually understand (no doubt through the implementation of complex visual user interfaces) the way in which our threads weave execution through the fabric that is our code. The impact this has is larger than you might think. Me personally, I am motivated by having such a tool because it will allow me to be more productive and gain an understanding of a piece of software while expending much less effort, and these tools would also enable beginner coders or even non-coders to gain an understanding of some given code, enough to be able to characterize the nature of something that has gone wrong, even if they might not be able to actually fix it themselves. This even stands to create a dramatic increase in available manpower, as we all know how wide the gap still is in terms of demand and supply for developers.

Something like Rust, as well-intentioned as it might be, and as critically helpful as its memory safety guarantees are for so many applications, does essentially nothing to address this need (nor did it set out to).

@jgarvin

jgarvin commented Aug 22, 2022

@unphased I think Rust enforcing mutability XOR sharing actually makes a giant dent in the problem you're discussing. This property isn't enforced in other languages and does actually require control flow analysis.
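
For anyone following along, a tiny sketch of what the rule means in practice (the variable names are arbitrary):

```rust
fn main() {
    let mut readings = vec![1, 2, 3];

    let shared = &readings;          // any number of shared, read-only borrows...
    // readings.push(4);             // ...but no mutation while they're live: compile error
    println!("len = {}", shared.len());

    let exclusive = &mut readings;   // or exactly one exclusive borrow, with no readers
    exclusive.push(4);
    // println!("{}", shared.len()); // using `shared` again here would be rejected too
}
```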

@unphased

unphased commented Aug 22, 2022

Whoa cool, I didn't realize the blog points to a real github thing. Neat.

Re: mutability XOR sharing, I understand you to mean that the design of the language is enforcing a clear boundary here, which I can certainly appreciate, except for how one quickly descends into sketchy territory once you need to deal with actual shared resources that need to mutate. Graph nodes are the textbook example here. And another one is that it seems like Rust places a lot of constraints on you that get in the way if you try to build something like a game engine. Shared mutable state is inherent in a lot of stuff. Yeah, we try to avoid it in most code since it's great at attracting bugs, but it's just a bit underwhelming to have to drop to unsafe or use another language for the most hairy bit of a project. Those are the bits you'd hope would benefit the most from the safety guarantees. So it's one of those hidden exceptions not revealed at the start: the strings attached, so to speak.
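
For what it's worth, the usual safe-Rust fallback for the graph-node case looks something like this sketch (the Node type is made up): reference counting for the sharing, interior mutability for the mutation, aliasing checked at runtime rather than compile time, and Weak for back edges so the cycle doesn't leak. Which is part of why none of it feels elegant.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Invented node type for illustration.
struct Node {
    value: i32,
    children: Vec<Rc<RefCell<Node>>>,
    parent: Option<Weak<RefCell<Node>>>, // Weak breaks the ownership cycle
}

fn main() {
    let root = Rc::new(RefCell::new(Node { value: 1, children: vec![], parent: None }));
    let child = Rc::new(RefCell::new(Node { value: 2, children: vec![], parent: None }));

    // Sharing is explicit in the Rc; mutation goes through the RefCell.
    child.borrow_mut().parent = Some(Rc::downgrade(&root));
    root.borrow_mut().children.push(Rc::clone(&child));

    // Mutating through a shared handle is a runtime borrow check, not a compile-time one.
    root.borrow_mut().value += 10;
    println!(
        "root = {}, children = {}",
        root.borrow().value,
        root.borrow().children.len()
    );
}
```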

It's not a showstopping issue but it does serve to show that perfection is still not within grasp. Although workarounds exist, based on my reading, none of them can be considered elegant. Would love to be proven wrong about that though.

Anyway, I really need to talk less about Rust because I still haven't learned it properly yet!

@nixomose

nixomose commented Aug 23, 2022

Anyway, I really need to talk less about Rust because I still haven't learned it properly yet!

Keep talking. This is the most enlightening, educational, and non-flamewar conversation I've been part of in a long long time on subjects like these.

I couldn't possibly comment as a non-Rust coder on how materially easier it becomes and the resulting value which it brings to the table; we need to somehow weigh the cost of the added challenge of using this language against the amount of effort the reduction in memory corruption issues saves us.

So this is interesting. The reason I learned rust at all was because my boss at the time wanted to build a system in rust and I was on the project. I learned something really interesting from this interaction with my boss. When he speaks of rust he always says "if it builds it runs." I spent a long time mulling that over and I came up with an understanding of what I think he meant by that, other than the obvious rust fandom isn't-this-great meaning.

What I came to realize is that, for him, it is endlessly freeing because rust, with all of its technical horsepower, relieves him of the responsibility of having to worry about writing software. Let me explain.

I first played with a computer in 1979 or 1980 or so. I had access to an apple II+ with 64k of memory (I was lucky). There was basic and there was assembly. And that was it. I learned basic, and when I ran into its limitations, I learned assembly. I took a class in college explaining how gates are made from transistors and flip flops and memory cells and accumulators and all that. After that I realized I could explain everything the computer does in software from the transistor on up. I could explain logic gates, and microcode and assembly, and algorithms and memory management, and soft switches and how animation worked (back then) and how to count clock cycles and how to twiddle bits. All of it, because that was what there was, so that's what you learned.
Coming from this background I can not for an instant ever imagine not wanting to understand what's going on beneath all of the layers of abstraction and tooling that we have available to us now. even if java garbage collects, I'm going to keep track of where I allocate what. I need to know and understand all of the ramifications of what my program is doing. I can't imagine a world any other way.
But most people don't have my background. Some people were introduced to the world of computers in the era of modern c++. They know from shared_ptr and all the magic glue that entails. I've even heard it said that native c arrays and malloc are bad and should never be used. This is a foreign world to me, but I can totally see how somebody who grew up much later than I did would see that as the basis for their programming universe and embrace it, and... hey, it's kinda nice. You stay away from all the old broken have-to-worry-about-everything native c stuff and the world can be a good place, things work, and you can live life oblivious to how shared_ptr is implemented.
Which brings me back to my boss. To him, (as I imagine it) rust frees him from having to worry about all the stuff I worry about AND gives him the confidence to know that it's ENTIRELY OKAY to think like this, because the compiler will ensure that he doesn't have to worry about it, because, if he can get it to compile, then it all must be good and okay and safe.
Which brings me back to your original point:

the resulting value which it brings to the table;

This depends a lot on the eye of the beholder. I see things waaaaaaaaaaaaaaaay differently than my boss did, and all the safety in the world is not going to stop me from having to understand what's going on behind the curtain and to me, rust provides me zero benefit, because I don't trust it anyway, because I have to worry about all that stuff myself. To my boss, the exact opposite end of the spectrum, it's dreamy perfection because it allows him to guilt-free ignore all the stuff I worry about and just write code that does what he wants, problems be damned.

It must be nice to live that way, and while he and I still have to fight with the compiler and have to suffer its artificial limitations, he gets something for it, I do not.

So, it's really about where you're coming from. To the extent that the point it's trying to make is that it can make things safe for you, it may very well bring something to the table for you, as long as you are not me.
At this point I think I'm starting to repeat myself, I just wanted to present some backstory that might explain my point a bit better.

if only those building blocks were robust and sufficiently well-behaved... as well as somehow capable of scaling when composed upon each other.

Amen brother. We have all those a zillion times over already. We need the next thing, not the same thing again.

My thesis is that the current state of tooling with regard to the introspection of execution flow is what's sorely lacking in this space and this is where the most significant gap exists

So this is also interesting. I agree, I hadn't thought of anything like this myself, but you are right. There is a big hole here and it could be an entire field of computer science to explore and learn about it and figure it out and design software better by knowing what's going on with all of the layers of complexity (and parallelism and multithreading, etc). This sounds like a brilliant first step to the next thing.

Rust ... does essentially nothing to address this need (nor did it set out to).

Yup. It is another first generation tool. But I think you might be on to something with the beginnings of a second generation tool.

But while I'm here, I will offer one counterpoint that jumps to mind immediately.
I'm with you, I want to understand what's going on. But (sweeping generalization here) most people don't. They just want to get the job done. And for most cases, they're right. Getting it to basically work, most of the time, is good enough.
In 1992 I had a one-off conversation with a complete stranger and in a few sentences they changed my mind completely on this topic, and what they said, essentially was this: "It is cheaper to pay Intel to build faster chips than it is to pay you to write better software."

And by 1992, they were right. And the situation has only gotten more dramatically correct since then.

So if you are not building a high frequency trading system, or building medical equipment or space navigation software, (there are others but these are my favorite obvious examples) then at least for the commercial world, it really isn't worth going the extra mile to make it good and right and correct, because your competition isn't going to, and if you don't play that game, they'll beat you, and you won't be in business anymore to do The Right Thing.

That said, keep it up. I think we (or maybe just I) know so little about what you're talking about that there may in fact be leaps and bounds of progress to be made that really will move the needle in a way that makes it so worth it, we will look back and wonder how we ever got along without it.

@unphased

Hey, thanks for responding! I'm glad that we're on the same wavelength on all these things, and the validation from someone with much more experience than I have is very reaffirming.

So I'd be happy to continue to riff on these topics, but I realized as I scan up that I've probably only covered about half of the stuff you mentioned that I actually wanted to add to. I think what I did was hit Submit once I saw that my wall of text had already reached fearsome proportions.

So I really like the bit about the code review and the debugger. I don't know why this is yet, but the concept of sitting down with someone to pair, with the choice of what to do there being to step through a program in a debugger, well, that was a bit mind-blowing to me, I guess in the sense of "why isn't this more of a common thing?" After all, the debugger is really the first step toward the holy grail of tools I've been envisioning for a long time: a tool that can render the meta-structure of your program for you in intense visual detail (let's say like browser devtools, except if the DOM were your data structures, but a lot more hardcore by also giving some way to time-travel to follow the shuttling around of the stack pointer as it weaves across your actual code...), letting you see any and all metrics that may be of relevance to your task at hand.

Starry-eyed daydreams aside, the debugger may well be the nucleus of the vehicle with which we can traverse to The Other Side. At the end of the day what I described above there is basically a debugger, albeit a hypothetical, super roided out one. The function that a debugger serves (and one can argue that any working debugger already does a good enough job of this as-is) is to release you from having to expend your cognitive resources on things like emulating what the computer would do. As coders we spend our whole lives learning how to do this task better, and although I shouldn't say that this is a futile undertaking (as time and time again we're left with it as our only guiding star while the world is consumed in flames around us), it kind of actually is.

It's plain to see how inefficient a use of our gray matter such an undertaking is (emulating software), so we should have -- and I argue 30 years ago we should have -- put our engineering effort toward better debuggers, designing languages that have reflection as a feature, and building meta-tools on top of them. Then we might already have a language that isn't just a toy, which can give you an interface like I envisioned there that everyone could be benefitting from, and programming would consequently be 10x easier to get into, and we'd have 100x more programmers in the industry, and our industry could actually be much farther along in its journey toward becoming a real profession (that's a reference to https://www.youtube.com/watch?v=ecIWPzGEbFc).

After all, the reason why software is in such a bad state today (with all the bloat and all the breakage all the time) is that in the typical company that makes the software, for any given random thing that breaks, there are only one or two people who know about, or even have the capability of understanding, the nature of that breakage. 90% of the headcount in that corporate office have no clue about, nor could reasonably be expected to understand, the mind-boggling complexity of modern software, composed of layers upon layers of abstractions, onion layers one thousand or more deep, each one of these abstractions perhaps (in a vacuum) having been designed and implemented more or less sensibly, but upon holistic review of the whole package, a monstrosity every time, guaranteed. Every time, somebody willing, somebody with a big enough brain to fit the operation of the whole business in his or her head and a half-decent undergrad-level understanding of programming building blocks, could whip up a product that runs absolute circles around it.

Can you imagine the level of dysfunction there would be if the military operated this way? Every soldier is required to be able to field strip their service weapon, and they must be able to do it under time pressure.

It may be unrealistic for me to say that software should be capable of introspection, since it's obviously not a trivial feature to add to a language (and I'd certainly have my work cut out for me if I am to attempt this somehow in a language-agnostic way), but given that the alternative is to relegate everyone to laboriously mentally suffer through these never-ending logical transformations to be able to guess at what's going on, it's no wonder that only a rare few are going to have the aptitude and willpower to put up with doing this for a living.

Speaking of meta-tools, I'll throw this out there: it's a bit maddening for me to see all of the brainpower being wasted on template metaprogramming instead of, e.g., just using C++ to metaprogram C++; all you need to do that is a library called clang.

So yeah it would appear that there's a lot of cargo culting going on. I guess your boss's mentality is a symptom of this too.

I'll surely have other ideas to add but this is good for now.

@unphased

Ah, ok so I wrote that before having finished reading your most recent post @nixomose.

one counterpoint

Boy did I ever pick the right internet stranger to solicit opinions from; what an apt place to bring this point up, and that is such a good point. We do have to be careful with perfectionism, and this is but one of the reasons: perfectionism slows you down, and in the real world being fast is probably the most important thing. It's imperative to calibrate for the correct level of good enough so that you can be fast enough to live another day.

I can offer some strategies around this. I'm not sure how well-formed these concepts are, but I do have thoughts on this issue.

Well, to set the stage, the bloat that we see present in software is expanding exponentially, and this is because it follows Moore's law step by step, and also because, just as you explain, the economics dictate it. Clearly it is more profitable to throw together what just works, simply by layering on yet more unnecessary complexity; anything that still works well enough to sell gets the green light.

I suppose we are getting slight respite from the moore's law exponential term as of late; Moore's is definitely still tracking the same growth rate as it ever did, in terms of transistor count, ops per dollar, ops per watt... in all of those areas it is alive and well, but in terms of clock speed and single-threaded instructions per clock, that exponential term has attenuated quite a bit, so this leads me to predict that the rate of growth of bloat might slow correspondingly, because the behavior of abstraction type bloat is not something that multicore generally mitigates. This is small consolation though.

Well, I'm definitely not okay with considering this approach to software product as a non-issue. What kind of future are we suggesting if we acquiesce to such a world order? The logical conclusion of that would be to have a Matrioshka Brain (planet sized computer) powered by a Dyson Sphere consuming all of the power produced by the sun, and it's just going to be used for running Microsoft Word 8358382 on it.

So the thing about time-to-market and "low quality gets the job done" is that this is totally how a lot of the world works. However, I think there are some bits of the world that work a little differently, where performance does matter, where this race-to-the-bottom is not necessarily in play. One of them is the world of gaming, where it's clear to see that these devs are by and large doing a decent job of getting as much mileage as they should be able to out of the advances in hardware. Rendering quality in games improves in direct relation to hardware capability. Another would be HPC. Maybe it's a very different computing and software environment compared to any other, but a great deal of the advancement of science, as well as perhaps commodity services in the burgeoning AI boom, is tied to HPC making efficient use of the technology at its disposal.

But it's not really limited to such places. I think the success of Apple products as a whole is a testament to this notion of quality triumphing over an absolute focus on lowering cost. Apple couldn't really compete on raw performance for quite a while there, but the dominance built upon what I can (hopefully plausibly) summarize as merely well-executed look and feel, getting software right, etc., enabled them to reinvest it into developing what is today very convincing dominance in raw performance as well. Though, just not for gaming. Can't have everything, I guess. 😉

I guess what I'm getting at with this is that, in the context of making a product and earning revenue, sometimes the component of it that relates to look-and-feel or performance is vanishingly small, but other times it's really not. It tends only to come into focus when feature parity exists between competitors though, to be fair. I think also that maybe if we start collecting everything that fits into that HFT, medical devices, space navigation category we might actually find that a great deal of stuff does actually fit in to that group. It's surely not small enough to just ignore.

And another thing relating to performance that I pay attention to these days in apps, and that I think nobody used to ever think about, is power consumption. If you have a CAD app, it's going to be a lot more friendly to the user if it doesn't constantly render the scene at full display rate even while the user is not interacting with the computer.

Also, I might have got my wires crossed a bit here. Software being built well, having pleasant and intuitive user interfaces, making efficient use of finite computer resources: this is a passion of mine, and the need for tools in software to better understand how our programs are executing is in support of this. Perhaps it is its own independent desire or perhaps it is merely a means to an end. Being able to better understand how the software works so that we can make it better appeals to me at a fundamental level, satiates the curiosity of the mind, and increases my productivity. But I think I've already covered some ground in terms of how much value it has the potential to bring to this industry as a whole, and maybe change the way that we do things. The potential is enormous.

The way in which it could enable us to create fundamentally better products given the same level of effort is certainly more appealing to me personally than the ways in which it could enable non-coders to contribute to software endeavors, but the latter is more likely to have an impact on the world and I would be remiss to ignore that side of it.

@unphased

unphased commented Aug 23, 2022

I at some point had something to say about async code. Since I've already hijacked this discussion I'll just go for it. Might even be able to steer it back on topic since this was about async code after all.

Well, so I've written a lot of async/await JS code in the last few years. I did find that it is a more palatable way to code compared to manually using Promises.

I've also done some work with React. Including modern React which is all about hooks.

There's something of a shared thread (pun not intended, ugh) between these, other than the use of JS, and it is this "abuse" of first-class function objects.

There's something sinister about designing the frameworks/language to just throw expensive primitives like this around.

Example: https://stackoverflow.com/questions/67275516/efficient-way-to-apply-backpressure-logic-to-a-generator

I detail a trivial task implemented using a fancy language feature, and the performance I got out of it was laughably atrocious, 10MB/s. The whole point of having generators is so that we can do fancy things with them performantly. If I wanted a slow generator I could have done that 8 years ago with Bluebird or something.

There are so many things about React and how it works and how it's designed that rub me the wrong way. I know how it works well enough to get my work done and maybe even do good work with it relatively speaking, but the whole thing seems to be built as if to say "Look at how clever we are!"

As if that wasn't the worst possible reason that could ever be conceived to add a monstrously large quantity of complexity to something.

They figured out something previously thought impossible. They found a way to make function closures confusing to follow again.

I love the article linked way above about "red functions" vs "blue functions".

I think that potentially the way we deal with this in the "next thing" is to go back to threads, because all these layers on top of threads (or coroutines, which can't even properly leverage concurrency) are kind of silly and inherently hard to reason about. They are hard to reason about for much the same reasons that genuine threads are; they're basically crippled threads. The argument that they're preferable because real threads (a.k.a. the real deal) are that much harder to deal with shouldn't actually be a satisfactory argument.

The real solution is to have tools that can show you, visually, how the threads of execution (be they truly concurrent or various flavors of async) are weaving around, maybe even potentially calculate for you all the ways in which you've given them freedom to do so, lazily generating the nearly infinite possible combinations for you if you need, all visualized, so you can make an informed decision about it, and get on with your life from there. I think there's been a lot of faffing about arguing one way or the other about how to avoid doing the bad thing, designing the language to preempt you at compile time from doing the bad thing, etc., when all you need is a technology to reveal to you the bad thing and all you'd need to do is get rid of it once you can see that it's there...

But there is hope. RustViz looks very awesome. It's a perfect example of the kind of innovation I'm talking about and want to see more of. Really refreshing. They need to make it not just a toy that works on some code, but a fundamental part of a language. I'm handwaving pretty hard by now, but it really does come down to priorities. Doing it at the LLVM IR level would be even better, as that serves even more languages. Of course this particular example is just about Rust lifetimes, so that isn't a real idea, just something to stoke your imagination with. Couldn't you imagine some lines and stuff representing thread execution, laid out helpfully aligned to the code that spawns them?

Thanks for coming to my TED talk

@nixomose
Copy link

nixomose commented Oct 2, 2022

My ted talk is more of an endless ramble, but here's mine....

as we have been going back and forth for a while I might end up repeating myself a bit, apologies if I do...

The function that a debugger serves (and one can argue that any working debugger already does a good enough job of this as-is) is to release you from having to expend your cognitive resources on things like emulating what the computer would do. As coders we spend our whole lives learning how to do this task better, and although I shouldn't say that this is a futile undertaking (as time and time again we're left with it as our only guiding star while the world is consumed in flames around us), it kind of actually is.

Spot on. I am forever telling stories about how I know people who, when faced with a bug, will open the code in question with an editor and STARE at it. They will make mean, super-concentrate-y faces and beam at the poor code until it fesses up its secrets.

Not a big fan myself. I'm much happier to have the computer tell me what it's doing wrong rather than have to try and figure it out myself. I mean that's what the computer is there for, right? to help me out? I never got why anybody would want to try and imagine what the computer is doing when there are tools built to do all that work for you and tell you. That's why we write test systems. So the computer can tell you when something it does is wrong. Now not all problems can be surfaced by a debugger, but surely lots of them can, and especially it can show you things you wouldn't have thought could happen, like side effects on variables or other memory, which you can notice in a debugger the moment they happen, as opposed to many seconds or minutes or days later when the incorrect data resurfaces, when you run your program sans debugger and it merely appears to work right.

https://www.youtube.com/watch?v=ecIWPzGEbFc

thanks for that, I'll have to go watch the whole thing, the few bits I saw sound very familiar and right on.

It may be unrealistic for me to say that software should be capable of introspection
it's no wonder that only a rare few are going to have the aptitude and willpower to put up with doing this for a living.

It's certainly hard to imagine now. I would think the computer would have to think on some level, maybe not actual intelligence, but something more than the neural nets we have now.

Speaking of metatools, i'll throw this out there, it's a bit maddening for me to see all of the brainpower being wasted on template metaprogramming instead of e.g. just on using C++ to metaprogram C++, all you need to do that is a library called clang.

I think template metaprogramming is a sign that the c++ guys have really gone off the rails a bit.
You don't even have to go that far though. Take "&&" in c++, a way to hint to the compiler that the reference in question should be treated as an rvalue reference, because there may be some ambiguity if you don't. I get why they came up with that, and if you follow the history of how c++ got there, it makes perfect sense. But if you take a step back and say "what are we doing here?" it seems to me that they've just added so much complexity to the syntax that they've had to make things like rvalue reference syntax to dig themselves out. Everything you can do in c++ you can do in c. You don't need to do std::move() all over the place to tell the compiler when to use a particular bit of memory vs copying it. You just use the pointer, and when you want to copy the data, you make the effort to copy the data; otherwise, you just move the pointer around. I am showing my age though. If we all wrote everything in c, how would we make progress. :-)

All that was to get to the point that template metaprogramming is just the same problem on a far larger scale. Somebody has clearly lost sight of something; they just like playing with new syntactic toys. At the end of the day, you're just making the compiler work that much harder, and you're still getting the same x86 machine code out of it. And all you did was make the person reading your code think harder. (See my comments below about how people weren't built to think like this.)

so this leads me to predict that the rate of growth of bloat might slow correspondingly, because the behavior of abstraction type bloat is not something that multicore generally mitigates. This is small consolation though.

Maybe, but it seems the answer to that problem is the threadripper. Why do in one core what you can do in 64? Plenty of room to add more complexity. :-)
This just means more people have to learn multithreaded concepts before they're allowed to play hardball. I'd like to think that maybe that's a good thing, raising the bar a bit, but I'm not sure. I don't want to force things to be complicated; it defeats my other arguments about how all this stuff is already too complicated.

, and it's just going to be used for running Microsoft Word 8358382 on it.

heh, you're going to laugh... I have a prediction that once they get quantum computers commoditized to the point where normal people can buy and use them, somebody is going to write an x86 translation layer and run windows on it. Probably the linux guys will get there first. :-)

One of them is the world of gaming where it's clear to see that these devs are by and large doing a decent job of getting as much mileage as they should be able to out of the advances in hardware

I remember somebody once telling me that gaming and porn drive the industry. Makes sense. To that I will add this: I think people praise google for the wrong thing. Yes they are indexing all the world's information and have the most bad-ass advertising system in the known universe, but the one thing that google did that really made a difference was: they raised the bar. The hardware had long been capable of quite a few amazing things, but everybody wrote lousy bloaty software until google came along. My favorite example is google maps. Harken back to mapquest and yahoo maps before google maps was released. It was worlds better in every way, and you've never seen so many big market players play catch up so fast until that moment in time. So I would say, gaming, porn and google make the world a slightly better place and are what push the envelope of the hardware.

Another would be HPC.
Then there's also this crazy ass shit: https://www.youtube.com/watch?v=ZDtaanCENbc
Gotta have respect for mainframes.

I agree with your point about apple, (though it was to selfish ends) so we have gaming, porn, google and apple. Oh and bitcoin mining. Offer people free money and look what happens.

Software being built well, having pleasant and intuitive user interfaces, making efficient use of finite computer resources: this is a passion of mine,

you were born at the wrong time, you would have loved the 80's.

There are so many things about React and how it works and how it's designed that rub me the wrong way. I know how it works well enough to get my work done and maybe even do good work with it relatively speaking, but the whole thing seems to be built as if to say "Look at how clever we are!"

So... since I'm just filling up this conversation with my opinions... I think it stems from the base problem that the web was a bad solution to a problem nobody quite knew we had. This is totally going to derail the conversation again, but I must.

Here comes my "full circle" rant. In the beginning we had mainframes and the dumb terminals attached to them. The dumb terminals displayed what ibm called a "presentation space." It was a map of characters on the screen that were either for display or input. The mainframe would spit out a page/map to the terminal, the user would fill in the fields, and hit enter, and it would send the input data to the mainframe, which would churn away on it and spit back the next screenful/map of data. Sound familiar? In between mainframes and the web, we had "fat clients" running on PCs. Seems to me that was a better deal, but I probably say that because that's what I grew up with. It just makes more sense to me to spread the cpu load out over all the end-user's machines and not dump it all on the server thus requiring endless server farms. But that's a different story.

I think nobody really saw the web coming, certainly the pc guys (like me) who didn't have experience with the mainframe and its submit/churn/render cycle attempted to recreate it and didn't get it quite right. The limitations of the dumb terminal were defined by ibm (or whoever made the 3270) and that's what you had to deal with, but now that you could willy nilly add extensions to the browser, you could do anything (see https://en.wikipedia.org/wiki/Blink_element)

It was all based on nothing more than hypercard and the idea that you could click links to go to the next thing. What the world really needed was a super well defined fat client that a server's application could control really well, more like rdp than vnc (not a great analogy). Or like the way prodigy did its client user interface. It had a lot of well defined controls that the client could easily and quickly render. The web was not designed to do what everybody's making it do, and that's why it's a giant mess of bad render standards. Give google credit for trying to fix that problem with chromium and the chromebook and all that, but even with all their mighty power, they couldn't pull it off. You can see the effects of the brokenness when you accidentally drag the mouse across some text in a browser and watch it highlight, because text selection is a default browser mechanism. If the browser were a real display/presentation space client, that wouldn't happen, and you wouldn't have all that fiddly keyboard-shortcut grabbing that the browser does when you hit ctrl-f in a google doc. Anyway, I'll stop now, you get my point. The whole thing wasn't designed, and it's a big broken bloated mess, and nobody has the power to fix it.

The palm pilot was the last time man had the chance to fix this; actually, I guess android is. Android's a bit of a mess too, but it is the right thing. It is a well defined (if ever a moving target) interface that works in a fat client way and gives a good solid experience when done well. (Just like you said with apple.)

designing the language to preempt you at compile time from doing the bad thing, etc.,

That is rust in one sentence. :-)

when all you need is a technology to reveal to you the bad thing and all you'd need to do is get rid of it once you can see that it's there...

So here's where I think the problem is: This Stuff Is Hard. Human brains were not designed to do this natively, intuitively, whatever you want to call it. It requires a style of thinking that is just not normal. normal as in "reach down to the ground with your hand and pick up that ball" normal that every 3 year old can understand. That's what human brains were designed to do.
I really think that given the current way we do things (with these silly languages we keep writing) there is no simple way to make this go.

I think threading as in native threads will always be hard; people just aren't built to think like that. We can force ourselves to think like that for a little bit, but neither you nor I will be able to imagine 1000 concurrent threads all working around each other. We just can't.

On the topic of "designing the langage to preempt you at compile time" you made me think of this:
What if we could limit the availability of threads to some restricted construct that is easier to use? I don't know react or any js web stuff, so maybe they already do this. But instead of native threads that you control the start() and join() of, perhaps only allow a nugget of code to run (a function) that the system's thread handler manages for you. Actually, I just realized I invented goroutines; that's exactly what they do. They tried to take some of the complexity of thread management away from the programmer. I was thinking of taking one small further step: force the programmer to only submit "process the work items" units of work and have the system-managed threadpool process the queue as best it can. I think this only works for certain types of tasks though, and as such goroutines handle that case pretty well as it is. Okay, never mind, not solving any real problems here.
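
Still, for concreteness, here is a minimal sketch of that "only submit work items" shape in plain Rust, using std threads and a channel. The pool size and job type are arbitrary, and I'm not claiming this is how goroutines are actually implemented.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A work item is just a boxed closure; the caller never touches a thread.
type Job = Box<dyn FnOnce() + Send + 'static>;

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();
    let rx = Arc::new(Mutex::new(rx));

    // A small, system-managed pool of workers draining the queue.
    let workers: Vec<_> = (0..4)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Hold the lock only long enough to pull one job off the queue.
                let job = match rx.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => break, // queue closed and drained: we're done
                };
                job();
            })
        })
        .collect();

    // The programmer's only verb: submit a unit of work.
    for n in 0..16 {
        tx.send(Box::new(move || println!("processed item {n}"))).unwrap();
    }

    drop(tx); // closing the queue lets the workers exit their loops
    for w in workers {
        w.join().unwrap();
    }
}
```
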
But back to the start of my point: I don't think there is a way to make it easy, it is just a hard problem that human brains weren't designed to deal with well. Here's a sub-rant if you want to follow it: http://deadpelican.com/howthebrainworks.html It's not 100% right, but a lot of it makes sense to me.

We really just need to build the next thing, and that thing will take our concepts, write the software, and figure out how to thread it and which variables/memory areas have to be locked across threads so they don't step on each other.
Maybe my brain is too little to think larger than that, maybe there is a better way, but I'm not seeing it.

I see what you're getting at about the rustviz thing, I believe you have a great idea about a good direction to go in the future. It's going to be hard to get there, but I think you're definitely on a better track than the industry as a whole.

@unphased
Copy link

unphased commented Oct 2, 2022

Hey I appreciate you taking the time to come up with a response. In the interest of not letting my browser potentially eat large swathes of queued up responses I might attempt a response in piecemeal fashion. Maybe I'll come back to edit it. I don't know how much havoc that might wreak w.r.t. the original blog post site. I'll edit something and check on it.

If we all wrote everything in c, how would we make progress. :-)

Linux sure seems to be making a lot of good, useful, highly visible progress. Other inspiring software that fits this category and that I also use heavily includes tmux. The list is short though.

@unphased
Copy link

unphased commented Oct 2, 2022

rvalue references, "&&" syntax

Yeah, I have wrestled with this for years now. Each time I seem to be able to gather a lot of willpower to expend toward making a little more progress toward really understanding it, to the point where I need to be in order to write lots of code without getting sucked into the mode where I'm researching how to properly write the code. But it usually ends up with the result that I'm not able to actually get much done, because at the end of the day I still do not know how the stuff works. You're right. It's a problem. There is indeed too much cognitive overhead. This feels like a compelling argument to stick to plain C pointer slinging and use whatever C++ features remain sensible (STL containers, smart pointers, hopefully).

@unphased
Copy link

unphased commented Oct 3, 2022

As for the full circle thing: yes, things tend to come full circle a lot. In software we do have these things that are almost like fashion trends; they are definitely cyclical, and the fat client/thin client thing is one of the big ones! All of this back and forth is just people making bets on a certain approach to something and the market eventually revealing the degree to which their executions are favored by users.

I think it really doesn't matter if an idea is fresh and trendy or 30 years old. We really should try to avoid the bandwagoning and just choose the tools that we use based on their own merits.

This Stuff is Hard

Yeah I agree that there are some dragons hiding in some icebergs round these parts

Human brains were not designed to do this natively, intuitively, whatever you want to call it. It requires a style of thinking that is just not normal. normal as in "reach down to the ground with your hand and pick up that ball" normal that every 3 year old can understand. That's what human brains were designed to do.
I really think that given the current way we do things (with these silly languages we keep writing) there is no simple way to make this go.

I think I will argue with you on this.

I think if we are able to get millions of people around the world to passably simulate what computers do and make useful productive things out of writing and reading code, in the way that we have been doing it, there is nothing built into our nature that should or could prevent us from (what I presume to be your point, the difficulty in) effectively reasoning about multithreaded software. Concurrent execution of machine code, if you will.

It's hard, certainly. It's hard enough to understand a typical singlethreaded application. The complexity of dealing with mutating state across a large number of threads, which may proceed (or not), independently (or not), and concurrently, is a lot of extra state to keep track of. But the truth is that if you had a hundred threads you actually needed to keep track of, this really is only 100x more state to deal with. That is not some kind of intractable computational problem, the way a factor of 2^100 might be (though it's true that 2^100 might be some sort of approximation of the number of possible ways in which 100 threads can interleave with each other). When merely looking at what the state of the system is at any one time, it's just a multiple of your stack and your registers and whatnot.
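
To put a rough number on that distinction, under the toy assumption of n threads each executing k atomic steps with no synchronization: the state you have to hold at any single instant grows linearly with n, while the number of possible interleavings is

$$
\text{interleavings}(n, k) = \frac{(nk)!}{(k!)^{n}}
$$

which for n = 100 and even small k dwarfs 2^100.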

We can use a normal debugger like gdb that's been around for a while, and it's more or less usable for stepping through a singlethreaded app, but it's probably fair to say that even though gdb lets you view, switch, and step between all the threads in your app at will, the limitations inherent in its command line (or even script) interface come to the foreground. You can't just suspend threads and expect an application not to implode or degrade in a way detrimental to your investigation.

I suppose it's easy to throw your hands up at this point and just say oh, it's too much work, or it's just too hard to expect anybody to keep track of it, but that's really the same kind of thinking that applies to trying to write machine code or assembly code by hand to program our computers. So why don't we have effective tools that let us debug multithreaded applications? I think there are a lot of such tools, maybe. The fact that we don't have something for this that everyone is already using doesn't mean that such a thing is impossible.

I suspect that it might not be until the spatial computing revolution really kicks into gear that we can experience what it would be like to visualize these things that we thought until now we couldn't visualize, or that we shouldn't or something like that. Already with linux we can trace the execution of programs in so many ways, and with it not having even been terribly costly to outfit my workstation with nearly 100GB of RAM there's nothing that conceivably stops us from (even 10 years ago, say, when it was already feasible to have so much system memory) being able to produce very detailed execution traces of multithreaded applications and store all of the metadata you could want to have and have some sort of graphical tool with which to visualize this obviously very overwhelming data.

So I agree that there are some Very Hard Things but I do not think that all of us will collectively go insane if we tried to comprehend the weave patterns of sophisticated multithreaded apps. I think we would all be delighted if we could do that somehow. It's building such a tool that can generate this for us that will be Very Hard. But worth doing.

How the brain works

Yeah I think if we took out the rigid definitions used there for left vs right brain (I think the modern wisdom is that this distinction is oversimplified) there isn't a lot to argue with it, after you've read a few of these and attempt some introspection I think it is as obvious as some of these authors say it is. We clearly have lots of cool acceleration structures. I would argue that we probably don't just have two layers of operation (that is to say, the two modes of autonomous muscle memory vs conscious thinking). And thinking is not sequential like computers are. Neurons are all concurrently operating and processing.

The way my thesis connects to our brain is very simple and I'll draw a connection to your reaching down and picking up a ball comment as well. I think spatial computing (I guess this is the moniker I'm starting to use in lieu of AR/VR) is what's finally going to allow us to fuse more of our senses together to get back to this level of intuition. I think when we can view our data in three dimensions and interact with our data as naturally as we can interact with physical things, that is a kind of starting point. I guess the only practical way for me to share my viewpoint is to encourage people to explore the world of VR games today and use that as a jumping off point for envisioning what the productivity software of the future might look like.

Copy link

Good essay.

I think the problem is the wholesale acceptance of “async” functions as being good, necessary, or something. For heaven’s sake, it comes from JavaScript. Don’t we know better?

Without writing an essay myself, let’s run through how we got here:

  • I/O - the physical reality of it - has been async forever. It was when I was writing keyboard interrupt handlers in Z80 assembly in 1983. It is now. You’re asking hardware to do something and let you know when it’s done, or tap you on the shoulder when something interesting happens. The only way to do that reasonably is to give the programmer a way to say “When X happens, call this code”. No way around that.
  • Sometime in the 90s the entire industry took a horrific wrong turn, and collectively decided that unless you’re an OS developer, you needed to be protected from that reality, and language and runtimes simply MUST create the illusion for developers that they are writing a shell script for DOS on a 286 which could not POSSIBLY be doing anything but waiting for I/O to complete, or their little heads would explode, and after all they’re just writing tinkertoys anyway, so who cares?
  • Hardware vendors loved it, because developers weren’t just forced to write software that couldn’t efficiently use the hardware, but an entire generation of developers came up with NO IDEA that they were programming to an incredibly expensive illusion that ensured the software they wrote scaled badly. I worked for Sun in the heyday of J2EE. Believe me, hardware vendors loved it. Heck, the entire business proposition of Amazon EC2 adds up to “You get to keep using incredibly inefficient tools and scale anyway, because your brain is too small to handle doing it right”. Same for the industry fetish for microservices - which are great when you need them, but if you think that’s all the time, the only thing you’re maximizing is your cloud bill.
  • Along comes NodeJS, which PROVES, warts or no, that developers actually can happily build real services using callbacks, by… doing it. Lots of them. Their heads didn’t explode.
  • Along come the folks who paternalistically decided for us all in the 90s “Oh, noez! Callbacks are too hard!” along with those developers who started coding since 1995 but lived under a rock, shouting “Wait! We’ll save you!” and invent async/await - because you need to be protected from coding to a model that matches the way a computer actually, physically, works. Because that’s good for… well, not anything, really, unless you just really need to feel like you’re writing a DOS script for a 286. Async/await in Rust is a mistake.

The problem - and this is a thing I dearly wish language designers would get - is code DISPATCH - how things get invoked. Lots of languages have innovated in lots of ways, but nobody innovates on dispatch. What is actually needed is a way to say “here’s an ordered list of callbacks. They can require some types as input. They can emit some types as output that can be consumed by subsequent callbacks.”

Then the entire “callbacks are too hard” problem disappears - because you’re just writing small pieces of synchronous code that get choreographed - if you want to read a file, what you express in code is “the steps that come after me should have the contents of file X available to them as an argument if they take a file argument.”

I actually built a Java framework with roughly that model - building on the observation that, if you turn it sideways, what Google Guice really is, is a constructor dispatch engine. But a framework where you do all of your work in constructors of objects you throw away was a little too weird - and it IS kind of a hack on the fact that a constructor, via its type, is the one thing you can reference by name in Java’s type system without dictating its argument signature in the process. It works beautifully, and the ability to carve up the decision tree of responding to a request into discrete, named, reusable steps is a thing of beauty.

It seems to me Rust’s type system could be more amenable to something like that, with a bit of an assist from macros and trait methods.

Your point about closures and infection is well taken. So name them - let them have actual types. Let them emit potential arguments to subsequent pieces of logic into a “context” that has ownership of them, and lends them out to stateless bits of logic that cannot possibly leak a reference to something that is not another argument it was called with, and therefore owned by the same call context. It’s basically just message-passing where the messages are whatever types the logic has as arguments, while the context is effectively the stack but with a mechanism to resolve data by type.
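
To make that shape concrete, here is a rough, illustrative Rust sketch of "ordered steps plus a typed context that owns what they emit". Every name in it is made up, and it says nothing about how the Java framework described above actually works.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// A context that owns every value a step has emitted, keyed by type.
// Steps borrow from it; they cannot leak references past the run.
#[derive(Default)]
struct Context {
    values: HashMap<TypeId, Box<dyn Any>>,
}

impl Context {
    fn emit<T: Any>(&mut self, value: T) {
        self.values.insert(TypeId::of::<T>(), Box::new(value));
    }
    fn get<T: Any>(&self) -> Option<&T> {
        self.values.get(&TypeId::of::<T>())?.downcast_ref::<T>()
    }
}

// A step is a named, reusable piece of synchronous logic; which thread it
// runs on is the dispatcher's business, not the step's.
type Step = fn(&mut Context);

// Hypothetical payload types standing in for "the contents of file X" etc.
struct FileContents(String);
struct WordCount(usize);

fn read_file(ctx: &mut Context) {
    // A real step might kick off an actual read; here we just emit a value
    // for the steps that come after us.
    ctx.emit(FileContents("hello from a pretend file".to_string()));
}

fn count_words(ctx: &mut Context) {
    // Requires a FileContents input, emits a WordCount output.
    let n = ctx.get::<FileContents>().map(|fc| fc.0.split_whitespace().count());
    if let Some(n) = n {
        ctx.emit(WordCount(n));
    }
}

fn report(ctx: &mut Context) {
    if let Some(wc) = ctx.get::<WordCount>() {
        println!("word count: {}", wc.0);
    }
}

fn main() {
    // The ordered list of callbacks; a dispatcher is free to run consecutive
    // steps on different threads, since everything they share lives in the
    // context.
    let pipeline: &[Step] = &[read_file, count_words, report];
    let mut ctx = Context::default();
    for step in pipeline {
        step(&mut ctx);
    }
}
```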

Threading mostly disappears as a consideration, since the code in question doesn’t care what thread it’s run on or if the previous step ran on a different one. You still have single ownership pretty easily.

To summarize, the problem isn’t that Rust has to be bad at async - the problem is trying to do it with the wrong set of abstractions.

@nixomose
Copy link

nixomose commented Jan 31, 2024

Well, what a difference a year makes. This thread fell off my radar, but here it is back again.

I do so enjoy this chat.

So, we've deviated a good distance away from whatever rust thing we were talking about and I've long since forgotten even more rust than I had before, but you make some more interesting points I want to comment on, especially as they relate to the passage of time since my last note...

Forgive me, I'm going to talk about (what little I know about) AI.

I said this: "
The paradigm hasn't been invented yet, I don't know what it will be, but I don't think we'll be typing code into ever fancier editors, 100 years from now. Yet all we do is keep reinventing the same languages over and over. we keep writing the same string handling library for these new languages over and over.
"

and later, you said this:"
and with it not having even been terribly costly to outfit my workstation with nearly 100GB of RAM there's nothing that conceivably stops us from (even 10 years ago, say, when it was already feasible to have so much system memory) being able to produce very detailed execution traces of multithreaded applications and store all of the metadata you could want to have and have some sort of graphical tool with which to visualize this obviously very overwhelming data."

The tying point here is about seeing the future and the past and looking at how much changes. Nobody can predict the future very well (although I will hazard a guess that something bad is going to happen on some tuesday next november) and nobody 4 years ago could have foreseen the hype cycle that AI has become.

I've been making comments about how there will be some paradigm shift away from the 'programming' we do now. I don't think AI is it, but AI is a step in a direction away from the same repetitive thing we've been doing over and over again.

I have to make one comment that I keep pointing out to people, because I feel so vindicated by this. Have you tried using github copilot? It is amazing. It is more amazing than chatgpt and dall-e. You can read all over the place about all the wonderful things copilot does for everybody, but for me, it does something special: it proves beyond any reasonable doubt that we have been regurgitating the same crap over and over and over and over again for so long, in so many similar languages, that it is now possible for a computer to pretty accurately predict what I'm going to type next, because it has been done so, so many times before.

You said: "
I think if we are able to get millions of people around the world to passably simulate what computers do and make useful productive things out of writing and reading code, in the way that we have been doing it, there is nothing built into our nature that should or could prevent us from (what I presume to be your point, the difficulty in) effectively reasoning about multithreaded software. Concurrent execution of machine code, if you will.
"

I agree on a somewhat limited scale some of this should be possible, there is nothing at the basic level that a computer does that a human can not do fairly easily (the math, the branching, the xors, etc) but it's the scale that goes beyond the ability of one person. So yes, you can get all the people of the world to participate (actually I think this idea was presented in one of the Three Body Problem books, good read by the way) and simulate a giant computer, but the networking of information between all the people nodes gets onerous and time consuming and at some point it will fall apart. Time is the problem. If people (nodes) die out before a complex operation can be completed, the computer doesn't work. Which brings up another point, humans are good at standing on the shoulders of their predecessors... to a point. It takes a long time for some really smart people to arrive at some new truth or algorithm, but from then on, others can simply take it as truth, and make progress from there. But some of the truth is hard to understand (quantum mechanics, for example) so it takes a while to gain enough background to even be able to understand it before you can start to make progress. At some point, it will take an entire lifetime to achieve enough understanding to get to the point where you can first start to make more progress, and then what? You've hit the human brain limit. Now obviously not all things are or will be like this, but there will be some things that can't be achieved by one human brain.

Oh but back to the point I wanted to make about your comment: yes to a point people can understand and manage to comprehend it all. But here we are, right now: AI has proven that we've already moved beyond that. We have finally built a computer system that is so complicated no human or set of humans can understand it. We have a chat gpt program that can do things and nobody can explain how or why. It obviously works. You can poke at it and prod it and see what effect it has, but no human can possibly simulate or think in their head enough about what's going on with those zillions of digital neurons to understand why it works the way it works or at all, or if it even is working.

A friend of mine told me that it seems if you threaten chat gpt with something like "if you don't give me the right answer I'm going to hurt a puppy" it will actually yield measurably better results. This little nugget is fascinating on so many levels.

  1. nobody can possibly reverse engineer what's going on, so they try random things to see what happens, because that's the only technique available to us at the moment.
  2. the innate nature of threats/harm to animals is baked into the corpus of data from which the language models were trained (oh what does that say about humanity)
  3. we have finally built something we can't debug.
  4. there's more but I can't think of any off the top of my head at the moment.

I also want to make a comment about irrelevancy. Sometimes we make huge leaps: one day we are not able to travel to the moon, the next day we are. But mostly progress is a series of small steps as we add to the successes of those before us. And what ends up happening is that old things become irrelevant.
I'm sure selling buggy whips used to be a big deal to a lot of people, but with the advent of the horseless carriage, not so much anymore.
All of our rust problems, our threading problems, our front-end-ui-programming/javascript/css problems will someday be irrelevant because we will have moved on to something else.
Some problems don't need to be solved, they need to be moved away from and made irrelevant.
Sorry, that was just an unrelated tangent I wanted to get out.

Back to the story at hand:
So I have a good friend who's neck deep in the AI stuff. He really understands, more than most, what the state of the art is and how it works, as much as anybody does. He tells me about it; most of it goes over my head, but I understand some of it at a high level, and I offered this analogy:

If we as humans look at a list of, let's say, ten 2-3 digit numbers, after a few minutes we could probably add them up in our heads. Certainly with a pen and paper, we could come up with the right answer.
Now try and imagine what an AI model is doing. There are bazillions of digital neurons feeding results back into themselves bazillions of times honing a response across all of these little data bits (you can see I don't really know what I'm talking about here) and in the end, it spits out some words.

Given that, I asked my friend: "I have this idea that all an LLM is really doing is being like a human with a super huge capacity to look at a list of numbers and add them up in its head. I mean at the lower levels, the digital neurons aren't doing anything that complicated, it's the ability of it to do it at scale with lots of data that ends up resulting in the amazing result we see. All it's doing is basically looking at a really really really large list of numbers and not doing anything much more complicated than adding them up."
And his response was "Yeah, basically."

So there we have it, the very thing everybody is marvelling at about chat gpt is in fact just a small brain with a really really really large capacity to manage lots and lots and lots of data very quickly.
And that's all it is.
And because of that mammoth size, we, us simple little one-brained humans, can not understand how it works.

I've got a bunch more but it's getting late, I'll save it for another time.

You also said:"
I think when we can view our data in three dimensions and interact with our data as naturally as we can interact with physical things, that is a kind of starting point.
"

That's certainly one possibility, and maybe it would work out, but given what's going on now, I don't think that's the direction we're headed in.

I don't think computers as we know them will become irrelevant any time soon. I think this AI thing is a hype cycle and like the internet and cell phones, we will add this new technology to our arsenal of tools we use to get things done, but maybe rust isn't the future. :-)

@nixomose
Copy link

"
I think spatial computing (I guess this is the moniker I'm starting to use in lieu of AR/VR) is what's finally going to allow us to fuse more of our senses together to get back to this level of intuition.
"

I think you're right there is potential there for something, a new way of thinking that allows us to take advantage of the abilities of the design of our brains, but clearly we're not there yet, and most people who might be able to make progress in this area are probably being distracted by shiny and currently exciting AI things.

That's just a guess though.

@Phlosioneer
Copy link

That's an impressive bit of necromancy, @nixomose.

I take some issue with your description of LLMs. I think it's extremely important to keep one critical limitation of LLMs in mind: they have no working memory. Not even a single bit of information. They have the text that they have written, that you have written, and their training. From that information, they predict one word-token. Then they start over from scratch for the next word-token.

This has some extremely important consequences. First, LLMs cannot plan ahead. They cannot start explaining one thing as groundwork for explaining another, more complicated thing. They cannot structure essays, stories, or documentation. In the case of programming, they cannot initialize a variable at the top of a function in anticipation of needing it in a loop later on.

Second, they can trip themselves up. Their output is inherently probabilistic, which means that each word they output (at appropriate places, like noun-subjects) has a small chance of injecting a new, unexpected idea into the text. Then, once they've output that unexpected word, they are beholden to their training to do everything in their power to make that word make sense in the context. This is how "hallucinations" happen; a single improbable word is output, it cannot be erased or undone, therefore the AI must force it to make sense.

Third, LLMs are unexpectedly dangerous to rely on, because of how convincing they can be. I have some firsthand experience with this. I work at a medical tech company, and we tried to use an LLM to help us write documentation. The goal was for it to supply a broad template that could be filled in with more technical details; it was never supposed to provide any substance or facts to the document. It was a pilot program, so each thing written was carefully looked at by multiple people. A document about a process was written this way, passed through several people looking for factual errors, and then was added to the "production" documentation hub. Soon, there's an incident, and someone needs to consult that document. It was wrong, and there were consequences (not medical ones, but financial ones). Turns out, the document only read correctly if you already knew the correct process. If you didn't know, then the phrasing and tenses of the document led the engineer to believe a system was able to handle an error, when it was not. (I have to be vague here, company stuff.) The error was extremely easily avoided by someone unrelated to that team writing a parallel document, because they understood the meaning of the words they were using.

Bottom line, the program was scrapped because it was too risky to rely on anything AI generated, even if reviewed thoroughly.

LLMs are a good answer to some problems. But they're not thinking machines. They're not AI as we generally think of the term in pop culture. They can't be applied to all problems, even if they look like they could be. I shudder to think of how many C files GitHub Copilot ingested with sketchy pointer arithmetic that was safe in the context of that project but not safe in general. Or how many C++ programs it was trained on that used/abused operator overloading.


As for your claims about debugging LLMs, many experts in the space have pointed out that the lack of debugging and insight into these models is because there's no funding for it.

Look at how long it took us to develop debugging tools for deep neural networks. Now, we can pretty accurately visualize what each layer of a neural network does (stepping up the levels of abstraction), what patterns individual neurons are looking for (tracing what pixels' values contributed the most to a neuron's activation function), and what images maximize the activation of certain outputs (deep dream, faces of dogs and cats in fractal forms).

We're already seeing some extremely early efforts to debug LLMs, revealing that early forms of chat GPT and similar LLMs are/were extremely vulnerable to certain tokens that corresponded to reddit usernames from r/counting. Use of those tokens immediately after they were discovered caused the models to go haywire and produce random nonsensical outputs, completely unlike any similar prompts.

It's not here, but that doesn't mean it's unknowable. The best debugger software for computers had to wait until someone would actually pay for it. The same is true for AI.

@unphased
Copy link

unphased commented Jan 31, 2024

I've got so many more little comments to stuff in this thread...

I'm much happier to have the computer tell me what it's doing wrong rather than have to try and figure it out myself. I mean that's what the computer is there for, right? to help me out? I never got why anybody would want to try and imagine what the computer is doing when there are tools built to do all that work for you and tell you. That's why we write test systems. So the computer can tell you when something it does is wrong.

This was one of the parts where, while I'm reading it, I genuinely can't tell if I was the one who wrote it. This notion right here is what's come to the forefront for me over the past few months, ever since August when I found and commented on your article. Actually, looking back, I now realize that much of the formative ideation and rubber-ducking for my big pet project took place right here.

I'm doing some experimenting around general (multi-language) software metaprogramming. As part of my project to enable the practical instrumentation of introspectability into arbitrary software, I'm building a unit/general test runner engine from scratch. One thing I've been realizing as I build my latest test automation framework at a low level (which exposes you to import/loading/compiling details, important to get into for optimizing test dispatch) is that testing is actually a really pure manifestation of the scientific method. The scientific method is explicitly about proposing hypotheses and then testing them. In software we are often operating in an unfamiliar realm and under various constraints, and the testing that you can do could encompass anything to do with your environment, anything you could imagine: from teasing out microscopic nuances in logic and arithmetic (through which you can poke at the capabilities of your language or execution runtime; e.g. I always end up with tests to validate IEEE754 floating point quirks, though why I'm repeatedly drawn to that I'm not sure), a sort of testing so low level that I might refer to it as metaphysical, ranging to end-to-end tests where (this is a sort of extreme thought experiment, bear with me) you put a box around your entire application, user interface and all, and poke at it using HID commands, e.g. maybe with a VM, maybe with a physical computer driven from an IP-KVM device. Like, in software anything is possible, and by and large everyone understands the importance of testing, at least, so the amount of testing that I see done manually is still shocking to me.

I can't claim to have any remotely comprehensive awareness of research in these areas, but it would appear to me that practical endeavors such as testing software and tracing the execution of software are guaranteed to eventually lead to truths about that software that you can benefit from, as opposed to, say, attempting to construct a language which may facilitate the ability to mathematically prove that a program written in it will behave correctly. That is theoretically nice, but practically remains about as far away from reality as physically transporting yourself to another galaxy.

But most people don't have my background. Some people were introduced to the world of computers in the era of modern c++. They know from shared_ptr and all the magic glue that entails. I've even heard it said that native c arrays and malloc are bad and should never be used.

I was born in 89 and the time I got into programming C++ was 2006 or so I think, and it was a magical time, as I was able to just use the internet to learn how to take the source code for a 3d physics engine and beat it into submission to make fun little games out of. I would have been working with C++98 (which was, simply, C++ to everyone at the time) and then C++11 came soon after to shake things up, though that wasn't going to land until around the time I graduated college, where I had the privilege of being taught CS fundamentals in C with the Unix context and history. I took a bread-and-butter computer engineering 101 course in my senior year as an elective, and it was one of the best decisions, as it gave me even more context for how to better understand these aspects of the natural world that I had come to care deeply about. I remember having a bit of a feeling like, "oh damn, that's it?" when I got to the material on logic gates... as that was the point at which my understanding could in some sense be considered contiguous, going down from application software where we're doing things with computers akin to thinking, down past the logical operations, and all the way to the new frontier of my understanding (which has been static since then), which would be the quantum physics that drives the behavior of electrons at the P-N silicon junction found in every transistor. Which has been a very nice perspective to have. Having a better understanding of the world is not about feeling superior to others, or even about using it to get ahead or make more money or anything else, so much as it is about feeling fulfilled. Somehow I have the gut feeling that the meaning of life is to just satisfy curiosity. To an extent it's unfulfillable, and maybe that's why it's so elegant/perfect/bittersweet; there is always something to occupy one's curiosity with. Anyway I guess I'm rolling with this until something changes.

If one is learning about this stuff and cares, like... really cares about how all these pieces fit together, I don't think one necessarily needs (although one would probably benefit from) a world-class higher education like I had, where even back then, for the specifics of what we were learning, we were directed to reference the internet anyway. Best tool for the job.

Maybe we can summarize this by saying that there are two kinds of people as exemplified by you and your boss. My wife and I for example are in opposite categories! If I had to guess only a fraction of one percent of individuals are in our category. Actually now that I think about it, I think we could probably just label these two categories as "Masochists" and "Non-masochists".

Although as an also-somewhat-outsider to Rust I'm perhaps not so convinced that it holds low value even for me (though for that to be the case I'd have to learn it enough to be fluent in it first). It sure seems like there have got to be certain applications where shared state isn't going to be much of a factor. I do have a mental block with it, in which I have next to no confidence that the borrow checker wouldn't leave my productivity in shambles. I can see that it's simply two sides of the same coin though, you can pay for your memory management sins in one way or another but the universe will collect on them for you... The notion that "unclean" memory access is forbidden at compilation time is not problematic in itself, as compile time insights in general are very desirable for coding quality of life. I think the place where Rust really comes up short is pretty much exactly what you said: for folks like us, who don't actually have PTSD from segfaults (and I'm NOT expressing any sort of elitism for not having PTSD from segfaults, I'm just stating that I don't fear them), what I fear instead is that there look to be classes of programs I may hypothetically want to author at some point which Rust will force an uncomfortable choice on me of having to write unsafe code in order to achieve. Even the very reasonable notion that some very optimized programs may need unsafe code blocks somehow takes some of the sheen off the appeal of the whole thing.

The other thing I was sorely disappointed in with Rust was that, like C++, like any other language I've ever seen, macros and metaprogramming (let alone introspection, by which I mean facilities in the language or runtime to let you get information or feedback on program structure, program state or program execution) are an afterthought. Indeed the only instance in which this is not the case is Java, but there too it is also an afterthought; it is just a lot more advanced compared to anywhere else. But I won't be caught dead using Java. From my cursory research on the topic, Rust macros may not be as convoluted as C++ template metaprogramming, and may be cleaner than C preprocessor abuse, but in no substantial way do they apparently provide capabilities above the combination of those. So, being already familiar with them (C++ TMP and preprocessor macros), and already having the stance that it's just more practical to leverage additional machinery to perform Turing-complete metaprogramming with, neither Rust nor any other language gaining ground recently has made any strides forward in this area, so I'm forging ahead with this approach of bringing my own toys to the playground. Those batteries they include in the package are just trash.

Like, especially from a perspective like mine, where you're basically still going to have better ergonomics with a garbage collected language like typescript that has types: when I want to drop down to a systems language, often (but not always!) that means I intend to get a bit nasty fiddling with buffers. So probably how this has to go is, if I just need speed I should probably choose Rust. If I want to really do dirty memory buffer shenanigans, I will not hesitate to reach for C++.

These days I feel even less FOMO about not rushing into learning Rust. I'm really confident now that I could learn it or any other language worth learning, effectively, when I decide it's something I want to do, in a Just-In-Time approach, with AI.

Speaking of LLMs, yeah, I've been using copilot for nearly two years now, I've been paying for chatgpt plus for nearly one, and I'm feeling quite good about the progress in open-weight LLMs, which I'm able to run on both my laptop (M1 max 64GB) and workstation (NVLinked 2xRTX3090); there is no shortage of side projects, that is for sure. LLMs are stupendously promising for all kinds of code related work, but it's probably fair to say that it's going to be a while yet before we could really give them instructions and have any sort of confidence that they'll interpret those instructions in the way we intend, which would be as good a definition of AGI/ASI as any other, after all.

Another tangent I have, to add onto a theme I've seen repeated a lot in this thread already: there are a lot of swinging pendulums/fashion cycles that we have observed in tech over the years (totally incomplete list):

  • thin clients <--> thick clients
  • asynchronous events <--> papering them over with inefficient abstractions
  • rendering pages on the server <--> rendering pages on the browser client (this one is a subclass of thin/thick clients)

I just want to mention that the latest trend in web development is server side rendering (SSR), a reaction to the likes of React (too long time-to-first-render what with the loading/parsing/compiling of the kitchen sink), with a return to rendering pages on the server again. This would be exhausting if that feeling weren't so overshadowed by the sheer comedy of it. I'm not sure what the correct response is to this silliness as you can see even Rust got heavily swayed by the trendy async await hype that was going around. I'd say being able to chain sequenced callback code without runaway indentation and not laying sheer waste to block scoping variable rules (and let's not forget proper exception handling of async events) has some amount of merit.

Many of us love to make fun of the JS dumpster fire of an ecosystem. I'd say it's justified, as someone who keeps coming back to it. I haven't escaped it because I'm building my auspicious project in typescript, and I was dealing with some ts-node issues the other day... and the solution? just drop ts-node, use tsx, no not the typescript jsx file extension tsx, the tsx that is basically the same exact thing as ts-node, just one that didn't turn to shit... yet, that is...

But this endlessly exhausting fashion-as-a-service, whiplash-central rugpull culture is merely a byproduct of visibility and velocity. There are more people looking around trying to be clever, coming up with new ways to do things... So far it hasn't proven to be impossible to figure it out, or at least to figure enough out to get something done at the end of the day. I was never sold on React to begin with, throughout its meteoric rise and now the too-little-too-late but inevitable backlash as it has firmly implanted itself as an enterprise frontend staple... I'm still trying to put my finger on why it's all so exasperating. There is a component of I-told-you-so to this one.

@nixomose
Copy link

nixomose commented Feb 7, 2024

so the amount of testing that I see done manually is still shocking to me.

This I think I can explain a little. I mean some of it will be "this is how we've always done it" and it just takes a while for the new thing to permeate through the entire industry, but I also think there will be manual testing in some form.
I mean you can, as you say, write something to prove a program mathematically correct, which is nice, but not very practical in the real world. But manual testing is. It's the most real-world thing there is, at least in the cases where a real human uses the software in the same way a real paying customer will be using it. No amount of unit tests, system tests or end-to-end tests will be as good as a human. Because a test can only do what the programmer thought to write a test for, and let's face it, the programmer is biased towards the use cases and failure cases they planned for. "I'm going to test all the code I wrote." But it's really hard to test all the code you didn't think to write, and that's what manual testing gets you.
I'm guessing that you and I come from different backgrounds in terms of how and where testing is done, so our scope of what encompasses "manual testing" is probably very different.

I was born in 89 and the time i got into programming C++ was 2006

So that's really interesting. I was born in 71, which never seemed like a long time ago until recently, where now everybody is born in the 90's and 2000's, and I was very lucky because I was young and accidentally paying attention during the birth of the microcomputer. I learned BASIC and 6502 assembly, because basically that's all that was available to me, and if it went too slowly in BASIC, your only other option was assembly, so that's what you did.
I've been saying for a long time that part of the reason I disagree with so many more modern (read: younger) programmers is that they can't possibly get the experience I grew up with. Yet you seem to have managed. I'm impressed. I don't think there are very many people who understand the innards all the way down. And I think (because of course this is how it happened to me) you're better off learning the primal parts and then learning how the layers are built on top of them, rather than learning modern C++ or Java or whatever and only afterwards learning how it's actually all implemented.
But I am biased. :-)
I too took a course in college that sounds similar to yours. This one explained everything starting at the transistor, to gates, to flip-flops, to memory, to accumulators, to microcode. And I already knew everything from machine code up, so there was a brief period of my life where I could explain how everything worked from the transistor up to WebSphere (which was the most absurd software around at the time I realized this).
I think you are very lucky and in a very rare position for having been able to learn what you did the way you did when you did. Good on you, not many people are so lucky.
I don't know if I feel more fulfilled for understanding what's going on, but I do like understanding it, just so I can think through problems and understand why some optimizations are worth it and some are not for example.

I can see that it's simply two sides of the same coin though, you can pay for your memory management sins in one way or another but the universe will collect on them for you..

100%. What nobody seems to realize is that Rust doesn't really solve any problems for you, it just moves them somewhere else. The first Rust fanboy I ever met used to say "If it compiles, it works." What he really meant was: if it compiles, you won't get a segfault. But you're still going to have logic flaws, failure cases you didn't foresee and didn't accommodate, etc. And like you say, if you're not scared of programs and are willing to understand how your program works, Rust buys you nothing but heartache, because you have to fight with the compiler just to get to a place where you can continue writing your software, move on, and get the next thing done. Sure, you will encounter memory problems at some point in your non-Rust programs, but again (to me, at least) it's worth it, because I'm going to want to understand my entire program anyway; Rust isn't saving me anything, and it's just hurting me.
By now I must be repeating myself, I will stop. :-)

if I just need speed I should probably use rust.

If you just need speed and aren't doing any threading or anything async, you should probably use rust :-)

For a long time I had a lot of good things to say about rust, and I still do. It is the most type strict and rigid compiler I've ever seen. The compiler warns me about all sorts of things other languages can only hope the linter will pick up.
I think that's great; that's the way it should be. If Rust has a [non-borrow-checker] flaw, it is that it includes the isize and usize types. To me, this was a flaw in C/C++, Java, and every other language that has an integer type whose actual runtime size depends on the architecture the program is running on. Remember, I come from assembly land, where everything is a distinct number of bits or bytes; none of this "sometimes it's 16 bits, sometimes it's 32 bits, but you can't tell just from looking at the source code." I will never in a million years understand why that seems like a good idea to anybody. This is a computer we're talking about here. It doesn't do wishy-washy things when it feels like it; it does exactly what you tell it to. So if you want to count to 10, you use an 8-bit, a 16-bit, or go crazy and use a 32-bit integer type. And if you want to get the length of a really large file, you use a 64-bit uint. Why would you ever want to leave that to the architecture of the machine you're running on? That means a perfectly good program that builds on my Raspberry Pi could fail miserably on my Windows laptop. It makes no sense. But that's another one of my peeves, I'll stop now.
But except for a few common things like that, Rust really is great from a concrete, know-what's-going-on-because-you-have-to-specify-it point of view. But (in my opinion only) the borrow checker is so annoying that C++ is still the better overall option.

I'm really confident now that I could learn it or any other language worth learning,

A long time ago I sat down and made a list of all the programming languages I'd ever written at least one non-trivial program in. I think there are like 22 of them now. And I can tell you one thing from all that varied experience: they're all the same. They all have loops, variables, expressions, and if statements. They call them something else, and the syntax is a little different, but it's all the same stuff. I'm sure APL is more different, and there are probably more exceptions to my experience, since all of my 22 are common-use languages, but you're absolutely right, you can learn any of them easily, including Rust; they're all the same.

This would be exhausting if that feeling weren't so overshadowed by the sheer comedy of it.

Funny you mention that. It was only last week that somebody told me there's React on the server. I'm still trying to tease out what that actually means; I probably don't want to know. But yes, it is all rather comical.

There are more people looking around trying to be clever coming up with new ways to do things

... new ways to do THE SAME things. I still think (and I believe this is where our discussion started, speaking of pendulums) that it's time to move on to The Next Thing. Coming up with a new clever way to do the same thing gets us nowhere.

I've been telling people so for a long time, now they call me a cranky old man, and they're right. But someday they'll be old too and THEN they'll understand. :-)

@Phlosioneer
Copy link

If rust has a [non-borrow checker] flaw, it is that it includes a isize and usize type.

Just want to chime in: these types are absolutely necessary, and couldn't be removed from any reasonable programming language that includes pointers. They are the bare minimum requirements to interact with pointers.

You say number sizes shouldn't vary from computer to computer. But pointer sizes vary, and sometimes a single computer has multiple incompatible pointer sizes! (Not just from segmentation modes, either!)

Without a usize, it would be unsound to store a pointer into any numeric type. Without an isize, it would be unsound to store a relative offset between arbitrary pointers. Think about the following pointer math: (very rough pseudo code)

// start, middle are pointers
let offset = (middle as __) - (start as __)
offset *= 2;
let end = ((start as __) + offset) as pointer;

What type of integer is __? Let's address the size issue first. It's tempting to say "u64/i64 should always be sound", but it won't be. The offset *= 2 operation may push it out of bounds for the pointer type's size. What about sign; is it signed or unsigned? If it's always unsigned, then the above subtraction must panic if middle < start. If it's always signed, then the upper half of the available address space cannot be accessed, which is extremely bad for embedded systems, or even some 32-bit systems.

To do it soundly, you'd need to rewrite this math once for every pointer size, and choose the code block depending on your compilation target. At that point, you've just recreated the effect of the pointer-sized integer type, except now the surrounding code gets dragged in. And god forbid if you need to store it or return it.

Beyond just being theoretically unsound, it would be unreasonable in practice too. It would force everyone to assume the worst case scenario at all times: u64 everywhere, in every library. Then you're going to be doing math on u64's even on 32-bit or 16-bit systems.

I'm in agreement with you that it's absurd to use types with changing sizes in general. And it's absurd to have varying size integer types if there are no pointers. It only makes sense to use pointer-sized integers specifically for pointer related tasks. And that's exactly what usize and isize do. At least in the standard library, the presence of usize or isize means that pointer math has been or will be done on that number at some point.
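
To make that concrete, here is a rough Rust version of the same math using the pointer-sized types; double_span is my own hypothetical name, and the wrapping ops are only there to keep the sketch panic-free:

// usize/isize are exactly pointer-width on every target, so these casts
// are lossless whether pointers are 16, 32, or 64 bits wide.
fn double_span(start: *const u8, middle: *const u8) -> *const u8 {
    // Signed, pointer-sized difference: handles middle < start without losing bits.
    let offset = (middle as isize).wrapping_sub(start as isize);
    let doubled = offset.wrapping_mul(2);
    // Rebuild the end pointer from start plus the doubled offset, still pointer-width.
    (start as usize).wrapping_add(doubled as usize) as *const u8
}

fn main() {
    let buf = [0u8; 16];
    let start = buf.as_ptr();
    let middle = unsafe { start.add(4) };
    let end = double_span(start, middle);
    assert_eq!(end as usize - start as usize, 8);
}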


Also, Java integers are and have always been fixed size. An int is the same no matter what computer, VM implementation, or Java version you use.

@Phlosioneer
Copy link

Phlosioneer commented Feb 7, 2024

They call them something else, and the syntax is a little different, but it's all the same stuff.

If you want to break out of the usual C way of doing things, check out:

  • Haskell or Lisp. Functional programming languages. Loops are not a language construct, and variables exist, but not in the usual sense of assignment to a named location. Pedants would object to grouping Lisp and Haskell together, since the underlying machinery is very, very different, but the end result is roughly the same.
  • Prolog, Verilog, and other DSLs (domain-specific languages). Languages that are Turing complete and could do anything you wanted, but are so specialized for a particular task as to be unrecognizable.
  • Forth. Extremely funky little language, designed specifically to use an extremely tiny compiler and interpreter (within 800 bytes combined). Technically lacks the concepts of if/then/else and for/while at the language level, though they've been re-implemented as functions ("words").
  • Tcl, QuartzComposer. Data flow languages. Hard to describe briefly, but they're very different from normal C style structure.

And some of these are extremely common. Verilog and its successor SystemVerilog are the industry standard for chip designers and manufacturers, embedded systems engineers, and electrical engineers. OCaml, a functional programming language from the ML family, is a very effective pick for some server applications due to parallelism and efficient thread management. Various data flow languages have made it into multiple popular tools, though they're rarely used in more than one program; Unity and Blender both use them to some degree. And Lisp derivatives were an ideal choice for an embedded scripting language for a while; Emacs Lisp is the most popular one still alive today. However, Lua took over that niche by being even easier and even more efficient, and then JavaScript took the niche from Lua now that efficiency matters less than documentation and learning curve.

@unphased
Copy link

unphased commented Feb 7, 2024

"this is how we've always done it" and it just takes a while for the new thing to permeate through the entire industry

I'll say, "this is how we've always done it" seemingly couldn't be more true. I was just reading this note written by Dijkstra in ... 1970! https://www.cs.utexas.edu/users/EWD/transcriptions/EWD02xx/EWD288.html

Now that is a remarkable timestamp; I'll note Unix epoch time itself starts in 1970. And even back then he was complaining of a similar software quality problem in the industry, no doubt a very different industry from now. It just seems like more evidence that this is kind of what we get when humans are subject to the nature of computation. Back then he was fighting the good fight, trying to stop folks from using goto. Nowadays... it's not quite so simple.

@unphased
Copy link

unphased commented Feb 7, 2024

if it went too slowly in BASIC, your only other option was assembly, so that's what you did.

As a nerd in middle and high school, there is a surprising amount of relevance to the whole BASIC-and-asm situation, as you'll see. This was a special time: the iPhone wasn't announced until I was just about to go off to college (and I didn't get a full iPhone till a year or two later; I was rocking a first-gen iPod Touch in my first year of college). So during high school what I did (idk about the other kids lol) was scour the net for Texas Instruments calculator games. They were written in BASIC or they were written in assembly! Now, as a discerning student, I had a TI-89, which was a huge step above the TI-83 since the 89 was sporting a higher-resolution screen and a Motorola 68k processor. The built-in computer algebra system for solving equations was really quite impressive; this calculator certainly was impressive compared to the Zilog Z80-sporting TI-83. I understand the Z80 to be a contemporary of the 6502. It's really interesting that such ancient technology persisted in that space as a sort of time capsule; I would not be surprised if the TI-83 is still a common sight in classrooms even today. It's a bit of a curious thing where the slow-ass processor has a bit of special charm, as it creates a kind of animation while it paints graphs as fast as it can go, running what I can only presume is not terribly unoptimized code given all the decades they had to polish it.

And obviously the asm-written games ran better; that goes without saying. I recall writing my own Texas hold'em heads-up game to run on the calculator, and I have to assume I did it in BASIC. Memory is fuzzy; I guess I must have done that on the TI-83 before I got my TI-89 and quickly forgot it existed.

@unphased
Copy link

unphased commented Feb 7, 2024

most type strict and rigid compiler I've ever seen.

It's good. And as @MrTact pointed out above

tons of static analyzers to try and find places where you've gone wrong.

You are literally describing how the Rust compiler works.

A large stack of static analyzers is clearly the way to go. I've been coding in TypeScript lately, and sure, it took me a while to even get around to discovering the lint rules that are available. Most recently I enabled a large package of eslint rules (it's funny, I believe my eslint rules are currently executed by Biome, which is a Rust implementation!) that leverage TypeScript type information, allowing them to be dramatically more extensive. I'm regularly hovering around 1000 lint errors in my project, fiddling with the settings and deciding which classes of issues I'm going to address and which to just leave there as suggestions.

That raises the question, then: if rust compilation consists of a large stack of static analyzers on top of the core compilation process, then for someone not afraid of segfaults like you and I, it would be more ideal if we could get rust in a flavor where the static analyzers got pulled out so we can write rust in unsafe mode without the intentional ergonomic reductions and treat the memory safety static analysis with an opt-in paradigm. I know I would be much more likely to take up the language if this is how it were laid out!
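
For context, and this is just my own illustration rather than any mode the language actually offers: raw pointers inside unsafe blocks are already exempt from borrow checking, which is about as close as stock Rust gets to that opt-in idea today:

fn main() {
    let mut value = 41;
    // Raw pointers are not tracked by the borrow checker.
    let p = &mut value as *mut i32;
    let alias = p; // freely copy/alias the pointer; no lifetime complaints
    unsafe {
        *p += 1;
        println!("{}", *alias); // prints 42; correctness is back on you, C-style
    }
}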

@unphased
Copy link

unphased commented Feb 7, 2024

integer word size

I don't think I'm an expert in this area, so I got some high-level answers on this with the aid of AI. What it brought up is that even with systems languages like Rust, the designers' hands are tied due to ABI compatibility and similar concerns, in addition to memory and performance efficiency, which emulation could certainly impact heavily as well. Also, with Rust you are apparently free to choose between i32/i64/isize, for example, and as long as the arch-dependent types are limited to the areas that need them, there shouldn't be a problem. It seems frustrating that the only improvement we got over C++ in this area is promoting the not-really-core-looking uint32_t/uint64_t types into u32/u64 as bona fide basic types. But I guess for practical reasons banishing the arch-specific types was going to cause more problems than it would solve.
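
A tiny sketch of that split, just to show what I mean; nothing here is exotic, it's all stock Rust:

fn main() {
    // Fixed-width types behave identically on every target.
    let file_len: u64 = 4_000_000_000;
    // usize tracks the target's pointer width: 32 bits on a 32-bit Pi, 64 on x86-64.
    println!("u64 is {} bits, usize is {} bits here", u64::BITS, usize::BITS);

    // Indexing and lengths use usize because they ultimately do pointer math.
    let v = vec![1u8, 2, 3];
    let i: usize = 1;
    println!("v[{i}] = {}, file_len = {file_len}", v[i]);
}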

The other thing about this is that for primary computing, 32 bit architectures, even ARM ones, are rapidly becoming irrelevant. But the embedded world is another matter I suppose.

Edit: I wrote that before reading @Phlosioneer's very insightful response; thank you for that.

@nixomose
Copy link

it would be more ideal if we could get rust in a flavor where the static analyzers got pulled out so we can write rust in unsafe mode without the intentional ergonomic reductions and treat the memory safety static analysis with an opt-in paradigm.

So... I call this "c++ without feeling the need to use all of the latest doo-dads."
I dunno, without the borrow checker (and the stricter defaults, I guess) I'm not sure I see any net value in Rust. You can get C++ to be pervasively and annoyingly pedantic if you turn ALL the warnings on. I'm sure it would make an amusing PhD thesis to compare the correctness/cleanness/whatever you call it of a Rust program with no errors or warnings against a GCC C++ program with no errors or warnings with all the warning switches turned on.

@unphased
Copy link

We already beat this topic to death; what it boils down to is that if you really care about your software, you're going to need to crank up the heat on it with testing, and you'd better be using computers to assist with that work, it's 2024 already. That means applying lots of static analysis while you write the code, so you get feedback on it before you even execute it, and also automated testing to execute the code extensively. That covers the failure cases you're able to predict; then you use fuzzers and tools along those lines to break into the space of issues you didn't or couldn't have thought of. Looking for a panacea like "if it compiles it's safe" is just that: looking for a panacea.
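
To put a toy shape on that layering (my own contrived example, hand-rolled so it needs no external crates; drop it in a lib and run cargo test): one test for the case you predicted, plus a crude randomized loop standing in for a real fuzzer probing the cases you didn't:

// A contrived parser to test: accepts "42" or "42%", rejects values over 100.
fn parse_percent(s: &str) -> Option<u8> {
    let n: u32 = s.trim_end_matches('%').parse().ok()?;
    if n <= 100 { Some(n as u8) } else { None }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn predicted_case() {
        assert_eq!(parse_percent("42%"), Some(42));
    }

    #[test]
    fn crude_fuzz() {
        // Tiny LCG so we don't need the rand crate; a real fuzzer would do far better.
        let mut seed: u64 = 0xdead_beef;
        for _ in 0..10_000 {
            seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
            let n = seed % 1_000;
            let s = format!("{}{}", n, if seed % 2 == 0 { "%" } else { "" });
            // Property: never panics, never returns a value over 100.
            if let Some(v) = parse_percent(&s) {
                assert!(v <= 100);
            }
        }
    }
}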
