Meetup with Solana, Zcash, & Parity – Why Rust Is Ideal For Blockchain Development?
On August 16th, Solana’s CEO, Anatoly Yakovenko, and CTO, Greg Fitzgerald, teamed up with Zcash’s Cryptographic Engineer, Sean Bowe, and Parity’s Core Engineer, Jack Fransham, to discuss why Rust is ideal for blockchain development.
- Solana is a high performance blockchain that uses a concept called Proof of History to achieve a cryptographically secure and trustless time source.
- Zcash is an alternative cryptocurrency with a decentralized blockchain that provides anonymity for its users and their transactions.
- Parity is a core blockchain infrastructure company that’s creating an open source creative commons that will enable people to create better institutions through technology.
Check out the video & raw transcript of the discussion panel below.
- Telegram: https://t.me/solanaio
- Solana Github: https://github.com/solana-labs
- Zcash Github: https://github.com/zcash/
- Parity Github: https://github.com/paritytech/
Raw Transcript of The Video:
Anatoly: Cool, so, my experience with Rust actually came at my last job. I was at Dropbox, and we worked on compression. And we, like, built this thing without using the standard library, with our own allocator. It was [inaudible 00:00:28] to run, like, in this little type-safe sandbox.
And that was really cool, but it didn’t really sink in to me until I started this project. And I spent two weeks writing C. I was, like, a good little C developer. I had 100% branch coverage and I was making pretty good progress until I needed some external libraries. And I was like, “Oh, man, I’m gonna have to download this stuff, and build it, and write makefiles.”
So, I decided to try Rust, and in a weekend, I was ahead of where I was. And that was the moment for me, like, “Holy shit, this is amazing.” Right? This is a language that is as fast as C, gives me all the modern type safety of Haskell, and it works, right?
So, it was kind of my awakening to Rust. So, like, do you guys have a moment in your life when you guys…like, it finally clicked that this is, like, the coolest thing ever?
Jack: I think for me the reason that Rust really clicked for me was not because of, like, the speed or anything. That came a lot later. Like, now I care a lot about writing formal Rust code, and, like, that’s something that I find really interesting in Rust.
But the thing that originally got me is because, before Rust, I was a C# developer. And C# is just nothing but runtime errors. Like, just whatever you do, just, like, you can’t even list the number of runtime errors that, like, the simplest operation can throw in C#. Because there’s, like, this dynamic type that’s completely un-type-checked. And there’s like…you have weird string manipulation runtime stuff.
And it’s not like these are obscure corners. Like, people use these all the time in C#. So, I was, like…I was actually not liking programming in general because I had to deal with this bullshit all the time. And Rust was a welcome relief from bullshit because it is an impressively bullshit-free language, I’d say.
I love the ergonomics. Like you were saying, like not having to deal with these makefiles, and having this very ergonomic package management system, and, like, the type safety and the null safety, that kind of thing, that’s really, like, what clicked for me. And the speed was, like, a lot more interesting coming later on.
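The null safety Jack mentions is the `Option` type: absence is part of the type, so the compiler forces callers to handle the missing case. A minimal sketch (the `find_user` function is hypothetical, not from any real codebase):

```rust
// Hypothetical lookup: in C# this would be a null-prone reference;
// in Rust the possible absence is encoded in the type as Option.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        2 => Some("bob"),
        _ => None, // no null, so no NullReferenceException at runtime
    }
}

fn main() {
    // The compiler forces us to handle the None case before using the value.
    assert_eq!(find_user(1).unwrap_or("unknown"), "alice");
    assert_eq!(find_user(99).unwrap_or("unknown"), "unknown");
}
```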
Anatoly: And Sean?
Sean: Yeah, kind of the same thing for me, I guess. Speed was mostly a post hoc rationalization of why I work on Rust now. I can say, “Oh, well, it’s very fast, and it’s really easy to work with, and really easy to audit,” and so on.
But I think when I first got involved with Rust and just started learning it, I think that it made me like programming again. Because I had gotten into a situation where nothing was looking good and I didn’t wanna have to resort to writing Haskell or something in order to get [crosstalk 00:03:10].
Haskell is great.
Jack: Haskell has plenty of runtime bullshit.
Sean: But Rust just, kind of, combined my mental programming model, my procedural thinking, with just really good performance, and I really like the concepts.
Obviously, when I first got involved in Rust, it looked a lot different. It didn’t really have…like, it still had a garbage collector and all this other stuff inside of it. So, it’s changed over the years, and it’s become more elegant over time. I think that’s part of the reason why I like it so much, is that I saw it improve and become much better than it was originally.
And I guess if I saw it now, I’d just be like, “Oh, that’s a really nice language.” But there’s, like, lots of really nice languages coming out. I think it’s great that Rust got this momentum.
Greg: For me, I dabbled in a lot of different programming languages over the years. A lot of C, C++, Haskell, and Python, really. And every time…you know, I’d be using…I’ve tried my hardest to just choose only two languages on a project, to be able to say, “Okay, if I wanna do this high-level stuff or something, I’ll do it in Python, and if I wanna do, you know, functional type of stuff, I’ll do all that in Haskell, and if I really need to do low-level, high-performance, then I’ll switch it over to C++.”
And I just got burned so many times on those, the cross-language transitions. And it’s really, really tough. Especially when garbage collectors are involved, it’s really not composable that way.
And then if you…and then where C++, where it’s, you know, no garbage collector but also no safety, that’s also not very composable in that all of a sudden your Python program crashes, and it wasn’t supposed to be able to crash, right? It was, like, Python. That was supposed to be a property of it, but you did this plug-in in C++, and so now it does.
And so, yeah. So, for me, Rust is just a huge breath of fresh air in that I can look at it and say that I could just do this huge part of the whole software stack all in one language and not have to keep switching and dealing with those awful impedance mismatches.
Jack: I think the thing about C++ is not that it’s completely unsafe. It’s that it’s safe, but the safety is a complete lie. Like, it has all these things to make it, like, possible to write something that, like, is kind of safe, and, like, it looks safe, and, like, in the simple cases it is safe.
But, then, like, you try and do anything remotely complex with it and then, like…actually, not safe. That’s the worst thing. It’s unsafety that you don’t even notice without, like, 100% test coverage, and even then putting on, like, [crosstalk 00:06:00].
Greg: How about undefined behavior, though?
Greg: So, say you have an integer overflow in your program, and it just rolls over in the compiler that you’re using, but it says in the language definition that if you have an integer overflow, that’s undefined behavior and your compiler is allowed to do anything. Anything at all.
And maybe it doesn’t in the current compiler that you’re using because, you know, it’s a sensible thing to do, is to not crash your program because you have an integer overflow. And then you upgrade your compiler, and then, all of a sudden, it does crash. Or it crashes off in somebody else’s small library because they had, so, an integer overflow. Super annoying.
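Rust takes the opposite stance to the C situation Greg describes: overflow is defined behavior (a panic in debug builds), and the standard integer types offer explicit policies. A small illustration:

```rust
fn main() {
    let x: u8 = 250;
    // In Rust, overflow is defined behavior: a debug build panics on plain `+`,
    // and the programmer can opt into an explicit policy instead of hoping
    // the compiler keeps doing the "sensible" thing across upgrades.
    assert_eq!(x.checked_add(10), None);       // overflow detected, no value produced
    assert_eq!(x.wrapping_add(10), 4);         // explicit two's-complement wraparound
    assert_eq!(x.saturating_add(10), u8::MAX); // clamp at the maximum instead
}
```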
Jack: I mean, the Linux kernel, like, they disable the optimizations that rely on null pointers being undefined behavior because there was, like, a bug where they, like, created a pointer to a member of this random pointer that they were being passed, and then they checked that pointer for null.
And then, so, the compiler was like, “Oh, you just created a pointer to a member. That means it must not be null.” And then, so, they removed the null check, and then, like, you can just, like, bypass it and get, like, root permissions with, like, very little effort. So, like, a lot of this undefined behavior is really [inaudible 00:07:09].
When I think about Rust, like, whereas C++ you can create these charlatan fraud abstractions, with Rust, like, you have this unsafe, and as long as the unsafe is correct according to the rules of Rust, everything building on the unsafe is also correct.
Sean: Yeah, so Rust actually has abstraction barriers that expose safety as a first-class concept because, at the language level, unsafe is a first-class concept.
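The abstraction barrier Sean describes is that `unsafe` can be buried inside a safe API whose invariants justify it, and everything built on top inherits its correctness. A toy sketch, not any particular library's API:

```rust
/// A toy fixed-capacity buffer: the unsafe is confined to one audited spot,
/// and every safe caller inherits its correctness. (Illustrative only.)
pub struct TinyBuf {
    data: [u8; 8],
    len: usize,
}

impl TinyBuf {
    pub fn new() -> Self {
        TinyBuf { data: [0; 8], len: 0 }
    }

    /// Returns false instead of overflowing: the safe API upholds the invariant.
    pub fn push(&mut self, byte: u8) -> bool {
        if self.len == self.data.len() {
            return false;
        }
        // SAFETY: len < data.len() was just checked, so the index is in bounds.
        unsafe { *self.data.get_unchecked_mut(self.len) = byte; }
        self.len += 1;
        true
    }

    pub fn as_slice(&self) -> &[u8] {
        &self.data[..self.len]
    }
}

fn main() {
    let mut buf = TinyBuf::new();
    assert!(buf.push(42));
    assert_eq!(buf.as_slice(), &[42]);
}
```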
Female Voice: Unknown caller.
Jack: I almost think that’s, like, my favorite concept about Rust. Because even Haskell, which has, like, amazing abilities to create abstractions, like, the thing it has to separate the unsafe and the safe things is the word “unsafe” in function names, like, “unsafe_perform_io,” which is basically just like, “Create nasal demons at this point in my brain.” It’s like… [crosstalk 00:08:02]
Greg: That’s a fact. And there’s one of those in the Prelude, so you pretty much can’t avoid it, even as a brand-new Haskell developer. If you’re gonna try to read a file, you’re gonna do an unsafe operation that’ll definitely bite you a few years later.
Anatoly: Yeah, Greg and I did a Haskell project over at [inaudible 00:08:22], but it bit us.
Jack: I think, yeah, Haskell has lots of good things about it, but it’s…there are lots of ways to shoot yourself in the foot. I think, like, a lot of languages, more nowadays, are working on better ways to avoid you shooting yourself in the foot. Rust is one of these. But then, of course, there are some others that just, like, have all new and interesting ways to shoot yourself in the foot.
Jack: Well, not to shit on other languages. Not to make this, like, a cult thing.
Anatoly: [crosstalk 00:08:54] yeah, yeah.
Jack: Only Rust… like Rust is the only good language.
Anatoly: You guys did an awesome blog post, “Why Rust?” You guys did it.
Sean & Jack: Yeah, that’s right. Yeah, yeah, yeah, yeah.
Greg: Yeah, [crosstalk 00:09:08]
Jack: Was it Greg? I thought it was Dmitri who did that one? Do you guys remember? It was Dmitri? Yeah, okay.
Anatoly: So, I’m like, I’d like to know where, you think…like, Rust, for me, improved a lot of the ergonomics of programming, like, in the [inaudible 00:09:22]. Where do you think it can still, like, do better? Like, now’s your chance to shit on Rust.
Jack: So, the big one is, like, there’s a lot of things that, like, there’s no reason that Rust shouldn’t support doing things on the stack, but currently, just as a limitation to the language, you have to do it on the heap.
So, for example, there’s like…these things probably seem quite obscure if you are not, like, deeply embedded in programming low-level Rust stuff, but, like, I feel like I hit them all the time. Which is…so, you can have constants that are, like, associated to, like, implementations of traits on types, which is already an obscure feature, but then you can’t use that as, like, the length of an array.
So, that means that, like, you end up with having to create…like, for example, you could have static strings that are, like, fixed-length on the stack, and then you concatenate them at compile time, but instead, you have to, like, do all this work at runtime.
And like, so, we do a lot of work in WebAssembly, and so, we have to, like, compress the size of a program as much as possible.
And so, a useful thing to be able to do is, for example, concatenate, like, static-associated strings together to create this long string at compile time, rather than having to do this work at runtime, or having all these different strings and then having it, like, concatenating them together, like, on the heap, so now we have to have an allocator. Better to do that in the compiler.
And there’s no actual reason that Rust shouldn’t be able to do this as far as, like, safety is concerned. It’s just not supported by the language right now.
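For context, the associated-constant feature Jack describes looks roughly like this. The concrete-type case shown here compiles; the generic version he wants (something like `fn buf<D: Digest>() -> [u8; D::LEN]`) was the unsupported part. The `Digest`/`Sha256` names are illustrative, not from Parity's code:

```rust
// An associated constant on a trait implementation.
trait Digest {
    const LEN: usize;
}

struct Sha256;
impl Digest for Sha256 {
    const LEN: usize = 32;
}

fn main() {
    // With a concrete type, using the associated const as an array length works.
    // The limitation discussed on the panel is the generic case, where the
    // const would have to flow through a type parameter.
    let digest = [0u8; <Sha256 as Digest>::LEN];
    assert_eq!(digest.len(), 32);
}
```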
And of course, the other one is impl Trait, which is the one that I think a lot more people run into on a regular basis. Less of a niche one.
Greg: So, what about impl Trait? What did you wanna say there?
Jack: Right. So, a lot…so, I don’t think you can yet return an impl Trait from a function in a trait, for example.
Jack: So, like, if you have free functions or inherent functions, which is, like, functions that were only defined on one exact type…
Anatoly: Like an existential type in Haskell?
Jack: Exactly. Yeah, existential.
Greg: But what is an existential from a trait? Like, does that mean the traits could implement any…two different trait implementations could return two different concrete types?
Jack: Yes, exactly. So…
Greg: Should they be able to?
Jack: Yeah. So…
Greg: Okay, all right.
Jack: The mental model you should have is that there would be, like, an associated type, and that associated type would be different in the different implementations, and, like…
Greg: You’d have to specify that in the trait. Okay.
Jack: Right. So, there’s two different designs. There’s one design where you have an explicit associated type and then you do “type whatever equals impl Trait,” or there’s the other one where it just has an implicit…like, it just creates an associated type for you.
But either way, that’s the mental model, but it’s not actually [inaudible 00:12:05]. And you run into that quite often. Which I thought…I find that lots of methods in Rust are not inherent. Most methods in Rust are trait methods.
Jack: And so, like, the fact that you can’t return impl Trait [inaudible 00:12:17] trait methods actually is a very common problem for you to run into if you wanna do stuff on the stack.
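What already worked at the time was returning `impl Trait` from a free function; returning one from a trait method is what was missing (and has since landed in stable Rust). A minimal sketch of the free-function case:

```rust
// Returning `impl Trait`: the caller gets "some iterator" without the
// concrete adapter type being named, and without a heap-allocated Box.
fn evens_up_to(n: u32) -> impl Iterator<Item = u32> {
    (0..n).filter(|x| x % 2 == 0)
}

fn main() {
    let v: Vec<u32> = evens_up_to(7).collect();
    assert_eq!(v, vec![0, 2, 4, 6]);
}
```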
Sean: I agree with both of those, and I think it’s kind of nice that, in terms of language features, the things we’re running into are ones the language just doesn’t have yet. These particular two happen to also be, like, in development, and we should see them, eventually.
Greg: Yeah. Oh, yeah. absolutely, Amen.
Sean: So, [crosstalk 00:12:43] good.
Greg: Yeah, absolutely. Amen. Rust developers must love you in that [crosstalk 00:12:49]. “He’s complaining about something that was released a month ago.”
Man 1: Well, that sounds like you were talking about const generics there.
Together: Yeah, yes.
Man 1: This is a gross hack. So, to take [inaudible 00:13:00] and generic array.
Man 1: You can use this today.
Sean: It’s very painful.
Man 1: [inaudible 00:13:06]. It was very painful. I use them a lot.
Man 1: They’re really bad, but you can use them today, and they work with no [inaudible 00:13:15].
Jack: This is true. You can’t use it for strings, though, because for strings, the length, obviously, there’s [inaudible 00:13:23]
Man 1: You can’t use it for strings because strings are [inaudible 00:13:23]
Jack: I’m sorry?
Greg: The strings are what?
Man 1: The string itself is [inaudible 00:13:22] because it needs more [inaudible 00:13:24]
Jack: Yeah, right. Well, no, no. It’s like [inaudible 00:13:30]. Like, so if you have [crosstalk 00:13:33]…
Man 1: So you can’t…you could probably back a string with a generic array if you used [crosstalk 00:13:40].
Jack: Yeah, you’d have to use [inaudible 00:13:41], exactly, yeah. But, so I have…like, I’ve got this repository of, like, concatenating strings at compile time using, like, extremely gross hacks and unsafe and constant functions.
But like, yeah, as I said, you can’t use it with traits. So, we had a design where we wanted to have, like, this trait implementation where you have, like, a constant string, and then it just concatenates them at compile time and, like, you have this very ergonomic way to create very efficient code. Which we originally were doing with a procedural macro, which is another thing that is, like, a feature that’s sort of upcoming.
Man 1: You should be able to use const functions for that.
Jack: Yes, yes. Exactly. And there’s other const [inaudible 00:14:22] things. So, like, we have, like, a hashing algorithm.
And I wrote a version of that that runs at compile time by, instead of branching… You’re not allowed to branch, so I had all the different possible outcomes of that calculation in an array, in a constant array, and then I indexed into that array.
So, it calculates all of the possible outcomes and then chooses the one, like, with indexing, except it creates an 8 million percent slowdown compared to doing it normally, so it’s not really feasible to do it at compile time because your compile times blow up over 8 million percent. Yeah.
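For reference, the restriction Jack is working around here, no branching in `const fn`, has since been lifted, so a compile-time hash can now be written directly. A toy FNV-1a-style sketch (illustrative, not Parity's actual algorithm):

```rust
// A toy FNV-1a hash evaluated entirely at compile time. Loops and branches
// in `const fn` were not allowed when this panel took place; they are now,
// which removes the need for the lookup-table workaround described above.
const fn fnv1a(bytes: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf29ce484222325; // FNV offset basis
    let mut i = 0;
    while i < bytes.len() {
        hash ^= bytes[i] as u64;
        hash = hash.wrapping_mul(0x100000001b3); // FNV prime
        i += 1;
    }
    hash
}

// Computed by the compiler, not at runtime.
const HELLO_HASH: u64 = fnv1a(b"hello");

fn main() {
    assert_eq!(HELLO_HASH, fnv1a(b"hello"));
}
```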
Sean: That actually…not to step on the moderator, but it kind of makes me wonder, in Parity’s situation, how much unsafe code do you end up resorting to?
Jack: Right. Actually, not that much in Parity. And a lot of it is still not necessarily…there used to be, very early on, before even I joined, like, unsafe everywhere. Everything was, like, very much unsafe because, you know, we had to squeeze out the most performance, and .get_unchecked() is so much faster than [inaudible 00:15:25].
Because the C++ devs, like, they loved this, this snake oil, this placebo of optimization. You know, [inaudible 00:15:38] like something seems like it should be fast.
Sean: “Oh, I don’t need to do a runtime check of that.”
Sean: “I don’t want a panic.”
Jack: Or, panic and slow. Panic and slow, you don’t wanna do that. You know, I mean, undefined behavior is very fast, to be fair. Can’t argue with that. But, like, one of the guys that worked for us, like, when he joined, he, like, removed a lot of the unsafe, and since then we’ve been removing more and more. And now, like, most of it is safe.
There’s a few unsafe hacks in, like, the deepest bowels of the stuff that needs to be fast and actually can’t be optimized right now. For example, we have a hashing algorithm where we have to concatenate two arrays together, but instead, we put two arrays in a struct, which is repr(C), and then we transmute that to an array. Because that is, like, legitimately, significantly faster.
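That trick can be sketched in a few lines; `#[repr(C)]` pins down the field layout so the reinterpretation is sound (toy sizes chosen for illustration, not Parity's actual code):

```rust
// Two fixed-size arrays laid out back-to-back in a #[repr(C)] struct can be
// reinterpreted as one contiguous array, avoiding a copy or an allocation.
#[repr(C)]
struct Pair {
    left: [u8; 4],
    right: [u8; 4],
}

fn concat(left: [u8; 4], right: [u8; 4]) -> [u8; 8] {
    let pair = Pair { left, right };
    // SAFETY: #[repr(C)] guarantees declaration-order fields, and two [u8; 4]
    // fields have no padding between them, so Pair and [u8; 8] share a layout.
    unsafe { std::mem::transmute(pair) }
}

fn main() {
    assert_eq!(concat([1, 2, 3, 4], [5, 6, 7, 8]), [1, 2, 3, 4, 5, 6, 7, 8]);
}
```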
So, like, yeah, but that’s…like, there’s not that much, like, [inaudible 00:16:16].
Anatoly: Right, but, like, who did all of that code in one day?
Jack: I wish it was one day.
Anatoly: That was, like, my first [inaudible 00:16:32] coming to Rust. I was like, “Oh, this is great. I’ll just cast, you know, unions from, like, buffers that I pull off the socket, right?”
Jack: Right, yeah.
Anatoly: Then be like, “Nope, this is gone.” So, that was a fun experience.
So, my next question is related to safety. Coming from, like, C and Haskell, I actually find that the modern-day type systems don’t add that much over, like, just simple C-style types.
I know, like, we use all these features a lot, but for 99% of the use cases, you can kind of get away with that.
How do you guys, like, actually use these higher-level type abstractions to prove actual meaningful, like, properties of the code? Like, these types of abstractions are designed to give you, like, some certainties, some guarantees. Like, where do you guys see the impact of those?
Greg: I see them as convenient ways to start when you’re writing code and you maybe don’t know a whole lot about it yet. I’m kind of thinking of, like, generic-array, specifically. That was really great too…you know, we wanted public keys and signatures in there, different lengths, and we didn’t want those types mixing at all.
And so we wrote those with generic array at first. And so we were taking advantage of this higher-level type system, and maybe it got me out of a jam or something a few times here and there.
But then, as I got, you know, more and more knowledge about what was all going on, then I would actually wrap those with very simple types and get rid of the generic array, actually. Just have a struct that you can use. And again, kind of getting to basic type checks.
So, yeah, I kind of see it as a useful safeguard for when you’re playing, you know? Actually, there’s a lot of things with, I guess, structure in math, really, right? Is that if you have a lot of structure, you can just sort of, like…you can just, sort of, change it around and do these kind of, like, you know, algebraic operations, and just sort of see what happens.
And that’s cool. So, that’s a really good use of types and structure, and if you actually know precisely what you want, then all of a sudden, maybe the types don’t quite match up to that nuance, the niche of what you really need [crosstalk 00:19:09].
Jack: I think what you’re getting at is the idea of type-driven development where, like, you build out the types first.
Greg: Yeah, we were talking about it just earlier.
Jack: Yeah, right exactly. So, [inaudible 00:19:10] it’s like that, that, like, concept comes from [inaudible 00:19:12], although…the name comes from [inaudible 00:19:14], not subject. The concept exists [inaudible 00:19:18]. But yeah.
Greg: Sounds like he’s ready for that Curry–Howard question there. He was on the plane, I think, when you threw that out there.
Jack: What’s this?
Sean: [inaudible 00:19:25]
Greg: He was joking about asking questions up here about the Curry–Howard correspondence.
Jack: I don’t know [crosstalk 00:19:31]
Anatoly: Do you guys [inaudible 00:19:32]
using the Python [inaudible 00:19:33], you know, like, contest, where everybody says every other character? Oh, that’s [inaudible 00:19:51]
Jack: Yeah, so, about types. I feel like they’re really…okay, so, there’s the two arguments for, like, complex type systems. There’s the ergonomics argument and then there’s the safety argument.
The safety argument, I feel like you can get a lot just from strong versus weak type systems rather than from static versus dynamic. Like, for example, in C, if you multiply an int and a long together, I actually don’t know what you get. You get a long.
But, like, this is the point, right? There’s, like, all of these [inaudible 00:20:25] you can get in C by, like, multiplying unsigned integers by integers and, like, smaller integers by larger integers.
And there’s, like, a lot of C programming guidelines. Basically, it’s just telling you…
Anatoly: [crosstalk 00:20:32]
Jack: It’s like, a lot of C programming guidelines are, like, how to avoid the traps that this language lays down for you. Rather than telling you how to write good code, it’s just trying to teach you how to avoid writing bad code. Whereas with a strong type system like Rust’s, where you can’t multiply, like, integers of different sizes together, where the answer would be, like, not intuitive, it just, like, tells you at compile time.
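A quick illustration of that point: mixed-width arithmetic simply doesn't compile in Rust until the conversion is written out:

```rust
fn main() {
    let a: u16 = 300;
    let b: u64 = 1_000_000;
    // `a * b` does not compile: Rust has no implicit integer widening,
    // so the conversion, and therefore the semantics, must be spelled out.
    let product = u64::from(a) * b;
    assert_eq!(product, 300_000_000);
}
```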
And then there’s the ergonomics argument where, for example, the, like, builder APIs where you have these, like, complex chains, like something.something.something, where you can, like, return, like, this very complex type by…
So say, for example, you want to have something where you can have this list of things to execute. You can, of course, have, like, a vector of boxed trait objects, but that’s not quite efficient.
So, instead you could have this, like, builder API that builds it up as a tuple, and then you return this, like, very complex [inaudible 00:21:36]. It has, like, this nested tuple of a list of all of the things you had to do. Like, the futures API does this. So, like, all of the things you want to do are in the type of the future. And so, if you had to write that, then it’d be extremely inconvenient.
And C++ has auto, but that only works with variable definitions, and it doesn’t work for so many other places. And…yeah. I mean, the bi-directional type inference makes that a lot easier to use. Yeah, those are what I see as being the two arguments for a stronger type system.
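An example of the ergonomics argument Jack is making: the concrete type of an adapter chain is a deeply nested tower of structs, but inference and `impl Trait` mean it never has to be written, while staying fully static like the futures API he mentions:

```rust
// The full concrete type here is a nested tower of adapter structs
// (Take<Filter<Map<RangeFrom<u32>, ...>, ...>>); `impl Trait` and type
// inference mean nobody ever has to write it out, yet nothing is boxed.
fn pipeline() -> impl Iterator<Item = u32> {
    (1u32..).map(|x| x * x).filter(|x| x % 2 == 1).take(3)
}

fn main() {
    let v: Vec<u32> = pipeline().collect();
    assert_eq!(v, vec![1, 9, 25]); // first three odd squares
}
```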
Sean: For me, I guess, it’s just purely code re-use, really. That’s the thing that I don’t like about languages with less rich type systems, is just having to repeat yourself and having to re-invent the invariants of your code every single time you write it, and state how it relates to other parts of the code. It’s just I think you really need…you need generics.
A weak type system I could take if I still have generics. I can put up with it a little…
Man 1: [crosstalk 00:22:44]
Jack: No, I guess it’s like… it’s having these abstractions while also being able to know, basically, that these things will still work, right? Like, in Python, you can do very complex things. But, like, there are generics and they take any type.
But, like, there’s no way to know that it actually works until you try it with at least a few types. Whereas in, like, something like Rust, you can know, like, before you call it. You can know, at, like, definition time that this thing will be correct rather than at [inaudible 00:23:24] time.
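A small example of the definition-time checking Jack describes: the trait bound is verified once, where the generic is defined, rather than at each call site with each concrete type, as duck typing would require:

```rust
use std::fmt::Display;

// The bound is checked here, at definition time: any misuse in the body
// (say, calling a method Display doesn't provide) fails to compile now,
// not later at some call site with some unlucky type.
fn describe<T: Display>(value: T) -> String {
    format!("value = {}", value)
}

fn main() {
    // Works for every type satisfying the bound, with no per-type testing.
    assert_eq!(describe(42), "value = 42");
    assert_eq!(describe("hi"), "value = hi");
}
```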
Anatoly: So, this, like, brings up the next point I wanted to make, or do you wanna…
Greg: I just kinda wanted to maybe defend alternatives to Rust that… That, we talked about earlier that the one thing that we really loved about coming to Rust is that there’s very few surprises. That, you know, what you wrote, that’s what happens, pretty much, in Rust. You know, not a lotta gotchas.
But specifically about generics, and I’m kind of going back to what I think…I believe it was Ian Lance Taylor had talked about from the Go language. So, Ian Lance Taylor developed the…what is the ELF linker, the super-fast one, the second one that came…
Man: The gold linker?
Sean: What’s that? The gold linker, yeah. Thank you. Who said that?
Man: [inaudible 00:24:24]
Greg: Thanks. I was just thinking, this guy Jack in the room is probably another guy who might know that.
Yeah, so, he wrote Go, and then I think he had considerable influence, maybe, on the Go language design about the generics specifically, saying that it kind of creates this awful modularity problem in that you create this generic and you haven’t really generated the implementation of that.
You know, because there’s an infinite amount of implementations of this generic, and you kind of don’t know where to put it. Should you put it in the person that uses it, that actually provides the type?
You know, and especially in the case of if you’re gonna create a shared object, right? You actually have a concrete thing you’re gonna distribute, and where should the implementation of these go…it would have to be in the caller, the user of it, but that means you’re now duplicating this across everyone that instantiates the exact same generic.
And so I think their argument against it there was to say that we don’t know how to solve that problem well, so we’re just not gonna solve it right now.
Jack: That is Rust’s solution, is to not.
Greg: And Rust’s solution was maybe not…was to go for it and hope for the best. And, you know, you get this nice, concise code, but you still have that problem, that gotcha. So…
Jack: I mean, that’s really only a gotcha as far as, like, creating more space on the disk.
Jack: Which I feel like is a pretty good trade-off.
Greg: Which, your embedded systems developers…
Jack: Yes. Absolutely. This is definitely a [inaudible 00:25:45]. Because, like, [inaudible 00:25:47]
Greg: And if you’re using a systems language that doesn’t have a garbage collector, that…you know, that maybe you’d want that kind of control?
Jack: And, like, this is certainly the problem that we’ve been running into with our WebAssembly work. Because WebAssembly, especially on the blockchain, like, for smart contracts and stuff, it’s essentially the same as embedded development. A lot of the same restrictions.
And so we have run into this exact same problem of, like, yeah, [inaudible 00:26:09] produce a lot of code bloat. And, like, I feel like there have been some, like, requests for, like, non-monomorphized generics.
So, you still use the generic thing, and then…but at compile time you specify that you want to use dynamic dispatch rather than static dispatch, rather than in the source code [inaudible 00:26:41] defining it, would be good for…
Greg: So, I think that’s kind of the Java solution, was to push it to the runtime. And then, you know, that conflicts with Rust’s zero-cost abstraction [crosstalk 00:26:42]
Jack: I mean, you still choose. You can choose to have the zero-cost [crosstalk 00:26:55]
Greg: But that’s cool. Like, you can annotate it.
Jack: But I feel like there’s really only…you don’t really have much other choice. Like, either you generate monomorphized code for each one, or you generate one piece of code that, like, uses function pointers. There’s…I mean…
Greg: Or, you pass on the whole thing, like Go.
Jack: Yeah, exactly. Well, I mean…but Go’s solution is the same as Rust’s solution except the monomorphizer is the person writing the [crosstalk 00:27:16]. It’s like [crosstalk 00:27:18].
Greg: Exactly, exactly.
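The two dispatch strategies being contrasted can be shown side by side; monomorphization stamps out a copy of the code per concrete type, while dynamic dispatch shares one copy behind a vtable:

```rust
use std::fmt::Debug;

// Static dispatch: the compiler generates one copy of this function per
// concrete T (monomorphization) -- fast calls, but duplicated machine code.
fn show_static<T: Debug>(value: &T) -> String {
    format!("{:?}", value)
}

// Dynamic dispatch: a single copy of the function, with a vtable lookup per
// call -- smaller binaries, the trade-off the code-bloat discussion is about.
fn show_dyn(value: &dyn Debug) -> String {
    format!("{:?}", value)
}

fn main() {
    assert_eq!(show_static(&7), "7");
    assert_eq!(show_dyn(&7), "7"); // same result, different codegen strategy
}
```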
Man 1: So, I believe Rust [inaudible 00:27:23] a compilation. You can identify any of these concrete types with a hash.
Greg: Okay. So you need, like, whole-program optimization [crosstalk 00:27:31]
Man 1: So, it’s more like, you know, like, a build server or something like that, right? [inaudible 00:27:25] hash…
Greg: So you can’t distribute a shared object on its own, and you have to postpone that decision?
Man 1: It’s more like the concrete types. Given a particular version of the compiler and given the generic parameters, right? Identifiable by hash.
So, it’s not like you [inaudible 00:27:56] address, [crosstalk 00:27:59] so long as everything’s the same across the board, invariably. Same compiler version, same generic code, same generic parameters, I think the Rust compiler can map it to the same hash.
Jack: The way to maybe turn that into a shared object is, then, to use [crosstalk 00:28:17]
Man 1: Yeah, yeah. So, generally, you have a [crosstalk 00:28:21].
Anatoly: So, this sounds like a… [inaudible 00:28:10] the solution is to write a linker, right? Like…
Greg: Yeah, [crosstalk 00:28:15]
Jack: Writing linkers is, like, infamously, unbelievably difficult. Like, linkers are just a very difficult program to write in general.
Greg: What do you think, Jack? Are they hard? And…
Jack: I’m somebody who’s never written a linker. Maybe they’re actually easy, but [crosstalk 00:28:28]
Greg: No, it’s the black magic.
Jack: But I, from what I’ve read about them, they’re [crosstalk 00:28:30]
Greg: It’s the black magic, for sure. And I think, you know, one of the spaces in-between in that design space is that you can do the link-time optimization.
So, like, in LLVM, you would, rather than compile all the way down to machine code, you would just compile down to LLVM bitcode, which is basically, you know, still kind of source code. And you pass that around in something that really looks just like a shared object.
Man 1: And they’re, like, near, right?
Greg: Yeah. And then it’s the loader, the runtime loader, that kind of sees that, “Hey, this isn’t an ELF object, this is LLVM bitcode,” and so then could do those sorts of transformations. But that’s, like, pretty darn recent. That’s, like, maybe the last five years. They’re kind of still working out the kinks there.
Jack: Do you know something crazy that LuaJIT does? Off on a tangent: if you call a linked library with LuaJIT, it will work out which functions are really small, and it will inline the assembly code from the dynamic object and then, like, rewrite it so that it’s no longer taking arguments off the stack like before.
Like, it’s, like, the craziest thing…it will, like, inline the assembly, the precompiled assembly code, because Mike Pall is a crazy person.
Together: Yeah, yep.
Jack: He should be locked up.
Greg: Yeah, I know. That’s a lesson in software engineering, right? If you trust the one smartest person in the whole world, and then he decides to go do something else.
Jack: LuaJIT in its current state is still very impressive. If Mike Pall dies, we still have a very impressive project.
Anatoly: Yeah, yeah. My next question, again, regarding type safety. I’m just really impressed with…just blown away, really. Rust solved memory safety and thread concurrency, which I didn’t even think was possible, in the type system, before the program actually, like, runs. Like, holy crap, this is awesome.
But these days, like, especially in our projects, we're faced with writing programs that have an immediate financial impact if there's a bug. So, like, I still haven't seen, like, a clear proof of a property using, like, any proving language, including, like, you know, Idris, Agda, or one of the recently proposed ones, that actually demonstrates a property that I can trust to save my money, right? To prevent, like, financial ruin in that contract. What do you guys think? Like, what are your thoughts on that? Do you see a type system solving that problem at all?
Jack: I mean, dependent types, [inaudible 00:31:09] somewhat. But, like, this is actually something that's been on the minds of quite a few people at Parity recently, because we are trying to push writing smart contracts [inaudible 00:31:17] for what…[inaudible 00:31:19] platform called Kovan. It's not…the technology is technically not called Kovan, but I call it that. Don't listen to me.
And I think that, like, what it is sort of coming to the forefront of, like, a possible method to solve this, in my head, is something like how SPARK versus Ada works. Where, like, there’s these contracts that you can write in Ada, and they’re enforced at runtime in Ada, and then SPARK will just basically go through your program and prove that none of these contracts will be violated at runtime, which includes stuff like out-of-bounds indexing.
So, essentially it's just like you write your whole program to panic if something is wrong, and then you run this program which proves that it will never panic. That's essentially the idea. And this is how aircraft guidance systems are written, using Ada with SPARK, or SPARK that is built on top of Ada, however you wanna call it.
And I feel like Rust, which is quite a similar language to Ada in many ways, could go down the same path sometime in the future and get a lot of the same guarantees that SPARK offers. What do you think?
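For readers following along, the SPARK-style idea Jack describes can be sketched in plain Rust: write the program so it panics when a contract is violated, and then (hypothetically) a prover shows those panics are unreachable. Here they are just ordinary runtime checks; the function and names are illustrative, not from any of the panelists' projects.

```rust
/// Precondition: `divisor != 0`.
/// Postcondition: `result * divisor <= dividend`.
/// In the SPARK style, a prover would try to show these
/// assertions can never fire; in plain Rust they are
/// runtime checks that panic on contract violation.
fn contract_div(dividend: u32, divisor: u32) -> u32 {
    assert!(divisor != 0, "precondition violated: divisor is zero");
    let q = dividend / divisor;
    assert!(q * divisor <= dividend, "postcondition violated");
    q
}

fn main() {
    println!("{}", contract_div(10, 3)); // prints 3
}
```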
Greg: So, you mean contract-oriented programming? Like, Eiffel-style?
Jack: Yes, right. It’s, like, kind of halfway between contract-oriented programming and dependent types.
Greg: Okay. And should you, the programmer, have a good idea of whether that’s going to be executed at runtime or statically? Have an intuition or…
Jack: So, the answer is that it will always…okay, so, you could write it to be always executed at the runtime, or you could write it to never be executed at the runtime. Like, you would pick one or the other.
And that's how it works in Ada, at least. Like, it's always runtime, so it's just part of the regular Ada compiler. And then you can run this SPARK thing on it, and it just proves that it will never, ever, ever panic. And then, if it will never, ever, ever panic, then it will just compile with [inaudible 00:33:18].
Sean: Kind of reminds me of this, kind of, trend that we have in high-assurance cryptography where we’re trying to automatically generate code that is formally proven to execute correctly. We often do this, and then we look at the code and we can’t read it. And so…
Sean: Well, I guess you could download the proof, the verifier that proves that the code is being generated correctly, and then hopefully everything's okay with that, and whether or not…whether that proving system is correct, doesn't have bugs in it, or what version you should get of it, or whatever.
And it’s kind of an interesting alternative, is writing clean code and then proving that it’s secure instead. And you know, that’s important from a crypto perspective, perhaps, as well.
Jack: It should be said that, like, this SPARK code is…like, you couldn't write a large-scale, like, full program in it, because it's just incredibly restricted. Like, MISRA-C as well is another similar project, which is, like, how NASA writes C, and it's incredibly restricted. Like, you can't use pointers, basically, I think, at all.
Sean: Do they use automated tools to check that the…
Jack: I don’t know. I don’t know. I think it’s just, like…
Sean: They just have a reviewer who goes through the whole PDF to make sure that every single rule is being followed?
Jack: Certainly at one point that was the [inaudible 00:34:45], yeah.
Man: They can’t use C?
Jack: Well, I mean, this is the thing. They use C because, like, C is so…like, it has all this, like, legacy. And, like, the compilers are very trusted whereas something newer like Rust is, like, so untested that [inaudible 00:34:58]
Anatoly: If you have 100% branch coverage and static analysis, that will, like, do a good job of identifying a lot of those problems.
Jack: Yeah, true.
Man 1: [inaudible 00:35:06] aliasing bugs.
Jack: Yeah, the aliasing bugs. But that taps into, like, just the runtime checks, like, Rust's index out-of-bounds things, and that's not enough. Like, I was talking to a bloke, a guy at the European Space Agency, who said that he wrote, like, rocket guidance code in Ada, not SPARK. And it was like, it had a bug in it where there could be an index out-of-bounds.
And Ada did exactly like…it maintained memory safety. It panicked. That was caught by some supervisor program which did exactly as it should, which is restart the program. But it got exactly the same input again, it panicked again, and then it just ended up in this loop of, like, runtime panics and then eventually, like, the rocket, like [inaudible 00:35:54] and died. [inaudible 00:35:54]
No one was killed. It was, like, an unmanned rocket. But, like, this is the kind of thing where I think that just having a panic if it fails is not enough. Like, you need to have very strong guarantees at compile time that you can, like, [inaudible 00:36:22].
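The panic loop in the story comes from out-of-bounds indexing aborting the program. In Rust you can opt out of the panicking path entirely with the non-panicking accessor and handle the failure explicitly; a minimal sketch (the function and fallback value are illustrative):

```rust
// `slice::get` returns Option instead of panicking on a bad index,
// so the caller chooses a degraded-mode value instead of a crash loop.
fn telemetry_sample(readings: &[f64], i: usize) -> f64 {
    match readings.get(i) {
        Some(&v) => v,
        None => f64::NAN, // explicit fallback instead of a panic
    }
}

fn main() {
    let readings = [1.0, 2.0];
    println!("{}", telemetry_sample(&readings, 1)); // in range
    println!("{}", telemetry_sample(&readings, 9)); // out of range, no panic
}
```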
Man: So, I think you've all talked about integer overflow. So, does that mean that all of your Rust code in all three projects is using the checked methods for doing arithmetic?
Sean: I don’t think I ever use unchecked. So, I mean, I’ve never…
Jack: But if you used checked with [crosstalk 00:36:27]
Man: I was just curious.
Greg: [crosstalk 00:36:31] is, but it's gonna be in a project where algorithms and data structures aren't primary [inaudible 00:36:34]
Sean: No, not in the crypto stuff.
Jack: It’s like you always use, like dot-checked-add?
Sean: No, not checked-add.
Jack: But do you use…
Sean: Overflowing [crosstalk 00:36:41]
Man: Yes. Right, right. You’d use specifically the ones with that kind of data. This is exactly how it works with us as well. Because most of the crypto stuff is, like, supposed to work [inaudible 00:36:56]
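For reference, the arithmetic flavors being discussed look like this in Rust. Plain operators panic on overflow in debug builds and wrap in release builds; the explicit methods make the choice visible. The values here are just for illustration.

```rust
fn main() {
    let x: u8 = 250;
    // overflow is reported as None instead of wrapping or panicking
    assert_eq!(x.checked_add(10), None);
    assert_eq!(x.checked_add(5), Some(255));
    // wrapped value plus a flag saying overflow happened (260 mod 256 = 4)
    assert_eq!(x.overflowing_add(10), (4, true));
    // silent two's-complement wrap
    assert_eq!(x.wrapping_add(10), 4);
    // clamp at the type's maximum
    assert_eq!(x.saturating_add(10), 255);
}
```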
Greg: So, let me tell you, I have maybe a more pragmatic view compared to formal languages. And maybe this comes from my experience of all the work that Anatoly and I did in Haskell for Qualcomm, in that we had…you know, we built all this software, and it was…you know, we did a lot of stuff in the type system, and it worked really, really well except that we were the only two humans in the group who knew what was going on.
Anatoly: Out of 30,000.
Greg: What’s that?
Anatoly: Out of 30,000.
Greg: There were a couple others across the 30,000, but in the 20 or 30 that we were working with. If anybody wanted a different feature, they would have to go through us. And that’s kind of a shame, right?
Or, you know, I'm sort of maybe a little bit more jaded, too, about…I looked at the proof of Quicksort in Agda, that just kind of proves that it will generate a sorted list. And I was thinking, well, you know, the average complexity of Quicksort is…what is it, is it N log N or something? No, no.
Jack: Couldn't tell you off the top of my head.
Greg: But Mergesort and Timsort are, you know, a little better, in the best case and average case. I’m like, “Well, can I have those?” And they’re like, “Uh, no. This is what we have the formal proof for.”
So, I think if you’re gonna go down that route, you have to be really, really, really certain that you’ve got your algorithms and data structures just perfect and that that’s where you’re gonna keep ’em, and then you can take this next-level polish to say, “Okay, let’s try to find those very last bugs,” and that in the meantime, you’re really gonna get a lot better bang for the buck with… I think branch coverage is really just the best thing [inaudible 00:38:57] on that, is that you have a test and you’re checking all of the edge cases to ensure that all conditions are checked.
And yeah, there's a couple, you know, like, Boolean-type issues where you can miss coverage there. But the ROI is just so good in Rust, and, like, all the rest of the safety guarantees in Rust make that branch coverage meaningful, as opposed to C++, where you can look like you have 100% coverage and in fact you don't at all because you have this undefined behavior [inaudible 00:39:32].
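Greg's point about coverage metrics can be made concrete with a small invented example: a single test executes every line of this function, yet the implicit `else` edge of the `if` is never taken, so branch coverage would flag the missing path while line coverage would not.

```rust
fn adjust(n: i32, flag: bool) -> i32 {
    let mut total = n;
    if flag {
        total += 1; // the only conditional line
    }
    total * 2
}

fn main() {
    // this single "test" yields 100% line coverage,
    // but only half the branches: the false edge is never exercised
    assert_eq!(adjust(3, true), 8);
}
```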
Sean: Another nice feature for generics is just writing tests that are generic over these concepts. And you can get into the algorithms and not have to worry about the objects themselves and how they’ll work, and move along, and kind of test-driven development’s helped a little bit more when you have access to types like that.
Jack: What do you think about tools like Quickcheck?
Greg: Quickcheck? So, I don’t like Quickcheck, actually.
Jack: Really? Okay. That’s not the answer I was expecting.
Greg: No. It’s…Quickcheck is written by John Hughes.
Jack: The Haskell version is.
Greg: Yeah, the Haskell version is. And it's been ported to every language, basically, because I guess a lot of people do like it. It's property-based checking. So you write your test in this form that says…usually it's something along the lines of, "If I do this operation, it will equal the result of this operation, for all X." And then it goes and generates all of the different Xes and tries to find the edge cases that'll fail.
And my experience there is, well, first, if you actually leave it in its naive form, you're gonna keep generating all these random cases going through the exact same code path 90% of the time, and it just makes your test suite very long. And all you really want is to catch those edge cases, which are already identified by the branch metric most of the time.
So, again, coming back to, I think, branch coverage versus line coverage: branch coverage is so incredibly meaningful that you can just have these couple of very fast-running tests that give you almost exactly the same guarantees that you would get from Quickcheck.
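The property-based style under discussion looks roughly like this: a minimal hand-rolled sketch, with no quickcheck crate; the xorshift generator and the chosen property are illustrative only.

```rust
// property under test: reversing the bits twice is the identity
fn property_holds(x: u32) -> bool {
    x.reverse_bits().reverse_bits() == x
}

// Quickcheck-style loop: generate many pseudo-random inputs
// and check the property for each one.
fn check_property(cases: u32) -> bool {
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // arbitrary nonzero seed
    for _ in 0..cases {
        // xorshift64 step
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        if !property_holds(state as u32) {
            return false;
        }
    }
    true
}

fn main() {
    assert!(check_property(1_000));
}
```

Greg's objection is visible here too: most random inputs take the same code path, which is why he prefers a few targeted edge-case tests guided by the branch metric.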
Jack: Right. I guess that’s what tools like AFL, like these fuzzing tools, they’re, like, similar to Quickcheck where they generate lots of data and then try and, like, convert that large amount of data to, like, a smaller set that causes the same bug.
Greg: Yeah, that’s super-cool.
Jack: Except that they…yeah, it’s amazing. Like, what’s the other one? There’s a Google one
Anatoly: Have you seen the one that generates…
Jack: [inaudible 00:41:35] okay.
Anatoly: …that generates the image?
Sean: So, it’s basically verifying a JPEG [crosstalk 00:41:42].
Jack: Oh, yes, I have seen that, yeah.
Anatoly: [crosstalk 00:41:43] a JPEG randomly. Like, effectively it figures out what the opcodes are in the beginning for it to be a valid JPEG, and…
Jack: We were fuzzing, like, the WASM implementations for, like, V8 and SpiderMonkey, which are Google's and Mozilla's WASM interpreters — actually, no, WASM JIT compilers — to, like, work out whether or not we could use them with the blockchain.
And so we were fuzzing. It would, like, generate, like, these valid WASM files after a very long time, and we, like, found cases where a small file would cause an unbelievably long compilation time, which is, like, not…you can't have that on the blockchain at all. Like, it needs to be linear…it either needs to be linear or we need to, like, use gas on the compilation process, which you don't wanna do, because it makes everything slow.
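The "gas" idea Jack mentions can be sketched in miniature: an interpreter charges a fixed cost per operation and aborts deterministically when the budget runs out, so no input can consume unbounded time. The toy opcode set and uniform cost here are invented for illustration.

```rust
enum Op {
    Push(i64),
    Add,
}

// Execute a program under a gas budget: each opcode costs one unit,
// and exhausting the budget is a deterministic, reportable failure.
fn run(ops: &[Op], mut gas: u64) -> Result<Vec<i64>, &'static str> {
    let mut stack = Vec::new();
    for op in ops {
        gas = gas.checked_sub(1).ok_or("out of gas")?; // charge per opcode
        match op {
            Op::Push(v) => stack.push(*v),
            Op::Add => {
                let b = stack.pop().ok_or("stack underflow")?;
                let a = stack.pop().ok_or("stack underflow")?;
                stack.push(a + b);
            }
        }
    }
    Ok(stack)
}

fn main() {
    let prog = [Op::Push(1), Op::Push(2), Op::Add];
    assert_eq!(run(&prog, 10), Ok(vec![3]));
    assert!(run(&prog, 2).is_err()); // budget exhausted on the third opcode
}
```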
Greg: I was just thinking, it’s been…we’ve all been talking for about an hour, and I was just…
Greg: And I really wanna hear…yeah, there we go.
Man: [inaudible 00:42:41] branch coverage metric, I’m a little skeptical of branch coverage because it’s sort of a false certainty.
Man: You can get 99%, 100% of branch coverage, but your program can still [inaudible 00:43:12] at runtime just based on different inputs that might take the same control path.
Greg: Absolutely, yeah. Definitely incomplete.
Man: Yeah. So, the, sort of, state of the art nowadays in the C and C++ community with this sort of testing is combining fuzzing with sanitization.
Greg: Okay, yeah [crosstalk 00:43:10].
Man: Are you familiar with sanitizers?
Greg: Extremely. I actually worked on the LLVM team for three years and worked on porting the sanitizers to the [inaudible 00:43:18] backend. So, whaddya got?
Man: Yeah, so, it’s combining fuzzing techniques with either working with ASAN, or MSAN, or UBSAN, or TSAN, pick your flavor, one at a time. And I think that’s probably more important than strict branch coverage.
Greg: Okay, so, let me…I guess two things here. One, I'd say branch coverage being better than line coverage is really a very key thing, in that, as a manager or a lead or something, if you tell your team that you wanna have 90% code coverage or line coverage, it's really, really easy to achieve 90% line coverage.
But 90% branch coverage is actually very difficult and far more meaningful, that you’ve actually exercised 90% of the code path. So, from a management perspective, that’s, kind of, the view I’m looking at there. But from the sanitizer’s perspective, being in Rust…so, we wanna actually…
One of the…maybe the biggest reason why I'm into Rust, and maybe I should have explained it from the start, is that having worked on the address sanitizer, the thread sanitizer, and the UB, the undefined behavior sanitizer, it really made me think about how much easier life would be to just code in Rust rather than be required to have 100% branch coverage and run all three of those tools and still not quite get the same level of guarantees.
And I worked in the embedded space where running the address sanitizer would cause a 3X memory overhead, and that was the good one. That was…right? You run the thread sanitizer and it’s kind of memory-mapping the entire address space. There’s a huge, huge amount of overhead there.
And then you look at, actually, an address sanitizer, that 3X overhead, what is that? It’s actually putting these landing pads on the outside of every memory location and saying that if you go and write to one of those memory locations by accident, then you must have had an out-of-bounds error. Well, what happens if you actually overshoot that thing? Then that is still missed.
And so Rust just, you know, catches all those at compile time, including the thread sanitizer ones, which is the tool that I just never wanted to run, honestly. So I really…
Man: I don’t blame you.
Jack: I’ve never used thread sanitizer. It’s, like, what’s [crosstalk 00:45:59]
Greg: It’s catching mainly races.
Greg: And, oh, it’s…so, I was in the embedded system, and I was using these tools to run Chrome. So, building the whole Chrome browser and running that in the embedded space.
So, that was a huge application, and so even with the three-times overhead of the address sanitizer, it was very difficult, and you'd actually end up just debugging problems related to having a bigger memory footprint instead of the problems that the address sanitizer was supposed to find. And then the thread sanitizer was just 10 times worse from there.
Man: I have a [inaudible 00:46:34]
Greg: Were you at the Boulder event?
Man: I was at Boulder [inaudible 00:46:38].
Greg: Okay, cool. You presented just before me.
Man: Yeah. So, if you’ve tried running ASAN on your [inaudible 00:47:04], you’re not even surprised by elements it finds.
Greg: You’ve done that, really?
Man: At least I have.
Greg: Yeah? [crosstalk 00:46:52]
Jack: [crosstalk 00:46:52] Unsafe Rust code, you mean?
Man: And a lot of popular [inaudible 00:46:53]
Sean: That is not unsafe code?
Man: It’s unsafe code.
Sean: Okay, yeah.
Jack: It’s also, like, a model…
Anatoly: So, is there part of…like, something that just warns you when you're pulling in unsafe packages?
Jack: There is…so, you can do things in your [inaudible 00:47:13] to warn where there's unsafe code. So you have to deliberately do "allow unsafe" in certain cases, but [inaudible 00:47:17] for external.
Anatoly: We can fix it with grep.
Jack: I guess. Yes, I mean, that’s the beauty of Rust, right? You can’t fix that with grep in C++.
Man: Yeah. Yeah, big time.
Sean: You can make a makefile to do all that and [crosstalk 00:47:35].
Man: So, there’s [inaudible 00:47:34] unsafe, but you can’t transitively prevent unsafe or [inaudible 00:47:40]
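The "fix it with grep" exchange has a compiler-enforced counterpart: a crate-level lint attribute turns any `unsafe` block in the current crate into a hard error, though, as noted above, it does not apply transitively to dependencies. A minimal sketch:

```rust
// Forbid `unsafe` anywhere in this crate: an `unsafe { ... }` block
// below this attribute would now fail to compile. Dependencies are
// unaffected, which is the transitivity gap mentioned in the discussion.
#![forbid(unsafe_code)]

fn sum_all(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    println!("{}", sum_all(&[1, 2, 3])); // safe code compiles as usual
}
```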
Man: So, for Greg. On your slides, when you’re talking about Solana, you mentioned that to help you not duplicate data and duplicate work, you would shard. You would split work up. And I was wondering if you could comment on, kind of, just…and that’s a hard problem.
Greg: Yeah, yeah.
Man: I was kind of wondering if you could comment on that as much as you’re willing or comfortable to.
Man: And maybe how you use Rust to help you solve the problem.
Greg: Okay. So, what I had said is that there, that is the sledgehammer of scaling solutions, to do any sort of horizontal data partitioning, sharding being one of them, and that we have deliberately chosen at Solana not to go down that route because it adds a lot of software complexity and security implications.
So, every shard would need to have a separate set of mining resources watching it, making sure that that shard is valid, and that's true of any kind of horizontal partitioning, so Lightning Network as well.
So, you'd have to understand that if you only have this fixed set of resources to verify the blockchain, you're now having to either add resources or split those resources up, making it, if you have a proof-of-work chain, for example, more vulnerable to a 51% attack.
And so we have chosen to not take on any optimization that might have implications on the security model, with the exception of choosing, I guess, proof of stake rather than proof of work, which of course is a very different security model, but it has the original premise, which is saying that you basically can't do optimistic concurrency control without it.
Man: And I guess I misunderstood then.
Man: Would it be possible to go back to that picture and to the slides? Is that okay? There was that picture where you had…like, I thought you were sharding. Maybe I misunderstood something.
Greg: No, [inaudible 00:49:45]
Man: Yes. Yeah, you were talking about data, and I assumed you were…
Greg: No. So, I…yeah, [crosstalk 00:49:52]. I was really trying to keep this under 10 minutes. Failed so badly. And I really noticed that when I got to this slide.
So, these are all validators, except that the top one is the leader node. And these guys are organized in a logical tree. And so, this is not sharding at all. This is just the leader needing to get the block out to all the validators, so that way they could validate all the transactions and send their validation vote back up to the leader.
Man: So, you were distributing…you’re distributing the work, the validation?
Greg: No. Actually, not at all. So, we split up the block, but you don’t validate the split-up block. You actually…that’s what all these kind of crazy arrows are between the validators, is that…
So, if you follow this white line, for example, this is taking, say, half of this block, sending it to this validator, and it goes and sends its half over to that validator as well as down to this next level, which will send it to its peer.
And so, all the validators end up being able to reconstruct the original block and validate that full block.
Man: I see. I understand. Cool, thanks.
Greg: No problem.
Man: Sorry, quick question. How do you determine membership within the different groups?
Greg: That’s done over a gossip network. So, that’s separate. So there’s this, like, n-squared communication that’s happening, coordinating who’s actually participating in the network, and then they also organize this…form this logical tree to efficiently pass the data through this data plane.
Man: So, it’s self-organizing?
Man: Okay, interesting. Randomized every now and again, or…
Greg: Yeah, absolutely.
Greg: And it's based on density, really. And you know, there's kind of this feedback, too: if these guys down here aren't able to perform well enough, they're either not able to validate fast enough or they don't have that gigabit connection, and if we can get that two-thirds majority without them with subsecond finality, then they'll get, kind of, booted down to the bottom there. That's why it's a tree.
Man: I’m a bit suspicious that no one’s complained about compile times yet. But my question is, I guess…
Greg: Was it compile times or link times? Sorry, Parity has got the massive, massive amount of crates, and so I can see how link time would be particularly problematic.
Jack: I think the link time is mainly a problem if it’s, like, incremental compilation, which we do [crosstalk 00:52:37].
Man: And at some point, our build dependency graph wasn't as optimal as it is now. But my question, I guess, is mainly directed at Sean, which is: to what extent does Rust make it harder to write crypto code? Like, particularly with stuff like side-channel attacks, and memory allocation, and just the compiler trying to be smart?
Sean: So, thankfully I've been able to avoid having to deal with side-channel attacks, because the crypto implementation work that we've been doing, we're replacing code that's written in C++ that is full of very… And, so, I can just say, "Oh, well, it's written in Rust, it's better."
Jack: It’s exactly as shit as the existing code.
Sean: It's a lot prettier, you know. But I think Rust does need const generics so that you can, you know, write some of the curve stuff and keep track of magnitudes, and limb sizes, and things like that.
You have the field elements and stuff like that, in certain cases, when you're trying to eke out performance. But I've found that you can get pretty good performance out of just the basic operations and not having to worry too much about doing const generics, and sophisticated types, and stuff like that.
Yeah, actually, writing crypto code in Rust was pretty fun. In Zcash, we do a lot of multi-core stuff because, you know, we have to split these fast Fourier transforms off into multiple threads and do all these multi-exponentiations and stuff.
And it’s really nice to be able to just hand this off to libraries that can maximize the use of the user’s machine and not have to worry too much about memory safety issues and re-use, keep objects, [inaudible 00:54:34] allocated objects, and do all these kinds of things.
Very nice, actually, especially the multicore side. I think that's something that…a lot of crypto work that's done in general, usually, just focuses on single-core performance because they're trying to make constant-time things, but we've been able to write and explore crypto in Rust in the context of multicore stuff, and batching, and all that kind of stuff. And we've had a lot of success with it.
Anatoly: Do you use [inaudible 00:55:26]?
Sean: Actually, I have a long time ago, and I wasn’t happy with it at that time. Now I’m very happy with [inaudible 00:55:39], and I hope to move all of our stuff to [inaudible 00:55:43]. But right now we use a mixture of, like, scoped threads, and futures, and things like that. We’ll hopefully move it all into [inaudible 00:55:53].
Man: Do you mind if I ask you, if side-channel attacks may be a problem, is it just that, like, you haven't got around to making it constant time? Or, like, [crosstalk 00:55:39].
Sean: It's mostly a performance thing. So, we…I mean, the hardest, the most expensive part of constructing these proofs that we do in Zcash are these multi-exponentiations, and they're over very large elliptic curve groups.
And so we need to use multi-exponentiation algorithms that are really fast, and those are variable-time…variable memory access, and cache stuff, and all that kind of thing.
So, it’ll be a challenge to approximate that performance while still having side-channel resistance, I think. So, it’s mostly a performance thing, just trying to keep the performance.
Because right now, I mean, we spent a couple of years getting our proofs down from 40 seconds down to, like, 2 seconds, but if we were to throw constant time into the mix, we would lose most of that, probably.
So, the interesting thing is, though, that we’ve kind of tackled this problem a little bit by rearranging the way our protocol is designed. So, these proofs require this expensive operation that needs to be parallelized, but we’ve redesigned the proofs and the way that it’s constructed so that the proofs don’t handle secrets for our particular construction.
Sean: Well, they handle…not secrets that would allow you to steal money from the person if you were to violate, do a side-channel attack or something like that. You could compromise their privacy with a side-channel attack, because the proof has all this private context, but it doesn't have the authorization to spend money, for example.
So, we've split out that authority in the construction in such a way that we aren't as worried about the variable-time-ness of the zk-SNARK proof generation code. It's not as critical. It's something to definitely tackle in the future, but… And I've started working on constant-time versions of all this stuff, but I just don't know if it's gonna end up being too slow or not.
Anatoly: That's interesting. Is this all in the Bellman repo?
Man: The…which, the constant-time stuff?
Man: No, the zk-SNARK part.
Sean: Yeah. So, Bellman, currently, is a library for doing zk-SNARKs, constructing zk-SNARK proofs. And we're actually hoping to split it up so that I have a different library doing zk-SNARKs, and Bellman is mostly about doing circuit, arithmetic circuit stuff that we end up having to do in various proving systems. Not just zk-SNARKs, but things like Bulletproofs and so on.
So, yeah, so there is…Bellman doesn't have the elliptic curve cryptography implementation. The special curves that we use in [inaudible 00:58:48], that's in a different library called pairing. And yeah, we're just splitting this all off into pieces that we can reuse and people can use.
I haven't published my current work-in-progress constant-time stuff because…actually, I have, and it's full of bugs. If you go on my GitHub profile, there's this repo named, like, "HSDF" and then this long string, just, like, because I smashed my keyboard, because I didn't want anyone using this library. So, like, "What should I call this repository? 'Do not use…'" But that's where my current work-in-progress…
Anatoly: Shit [inaudible 00:59:21].
Man: Another one for Jack. So, you mentioned that you were doing some interesting stuff with WebAssembly in your clients. And WebAssembly is pretty new technology. The tooling is very limited. And it’s very nice that Rust can [inaudible 00:59:17] assembly, but there’s a lot of risk in there.
And can you talk about some of that risk, why you took it on, and some of the challenges?
Jack: Right. I definitely agree that, like, WASM is new technology and untested, in many ways. The reason we chose it is because we needed a virtual machine target that was fast and that could be compiled to from existing languages, because we learned our lesson with Ethereum and Solidity.
Man: [crosstalk 01:00:09]
Jack: Sorry? You can make a decision as to whether or not that was a good idea. So, then we wanted to have something that would be, like, a sort of rallying point for multiple different languages.
And really, like, building our own would be a fool's errand. Really, the only thing that lives up to that would be either taking an existing architecture like ARM or x86 and then just emulating it on the platforms that aren't ARM or x86.
The problem with those is that they’re quite hard to verify that they’re correct. Like, you’d have to run them inside of a sandbox to prevent them from blowing up, whereas WebAssembly is already built so that you can, like, very quickly, like, in linear time, scan over the whole of a WebAssembly module and check that if you compile that down to the machine code, that it won’t jump into a function that it doesn’t control, or that it won’t segfault, or that it won’t cause undefined behavior.
Like, these are all, like, things that are built into the design of WebAssembly already. So, like, it solves a lot of the problems that we already wanted solving for a blockchain virtual machine.
Anatoly: Do you guys run these without memory protection?
Jack: We do not. We run them with, like, complete memory protection right now. But, like, WebAssembly is built so that you would not necessarily need to do that.
At the moment, we don’t compile to machine code at all. We just interpret it. We actually compile it to this intermediate language which is faster to interpret. It’s kind of like a midway between compilation and interpretation.
Like, our guy, my boy Sergey, he implemented that. Very smart guy. There’s actually a different WebAssembly interpreter that does the same thing, of compiling to an intermediate step, which is, like, significantly faster than our interpreter and we have no idea why.
So, whoever, like, this bloke…one-week-old news that we're, like, currently trying to work out how the hell [inaudible 01:02:26]. Yeah, we're not compiling to machine code.
I don't know if that helps you. There is actually an article written by me, I'm gonna plug myself, that sort of explains why we chose WebAssembly. And it should be on the Polkadot blog any moment now.
It's not quite there yet, but it's currently on my blog. But I'm…I don't know. I don't know if I wanna [inaudible 01:02:52] with me. I can tell you about it later.
Man: Okay, cool.
Anatoly: Do you guys wanna hang out and drink beers and eat pizza?
Together: [crosstalk 01:02:39].
Anatoly: Cool. Any parting thoughts?
Jack: I mean, at this point I've been up for, like, 24 hours. So I don't have any thoughts at all.
Greg: I just would say I am definitely a fan of Rust. Like I said, I have actually, like, studied programming languages pretty much my whole career.
And you know, actually, as I transition between languages, it’s usually because I’ve kind of, like, got in my head, “I’m gonna go build my own language this time.”
And then as I’m Googling that and reminding myself of that 100-point checklist of what it takes to actually launch your own programming language, I end up finding some language that already implements it.
And that’s kinda how I found Rust this time around. And now, right now, I’m not looking for another programming language. I’m really happy here. It’s, like, the sweet spot. It’s the language I think that I would have written myself. So…
Sean: So, now you can start running your own procedural macros.
Greg: Yeah, Well, [inaudible 01:04:05] macros. Yeah, so, like, good stuff.
Man: Quick question.
Man: So, I've been trying to convince everyone that I talk to about Rust, but one thing that keeps getting thrown back at me is failure on malloc.
Man: Yeah, can you [crosstalk 01:04:00].
Anatoly: [inaudible 01:04:01] without an allocator.
Man: What was that?
Anatoly: You can compile without an allocator.
Jack: You can still fail on stack overflow.
Jack: Like, there’s nothing to stop that.
Man: This is the one thing. I’m trying to convince one person, and he keeps throwing Lua, but…
Man: Yeah, yeah. [crosstalk 01:04:18]
Jack: [crosstalk 01:04:19] a Lua guy.
Jack: I mean, Lua doesn’t fail on malloc?
Sean: [crosstalk 01:04:24]
Man: Yeah, you know, really, I mean, [crosstalk 01:04:28]?
Anatoly: What is this [crosstalk 01:04:30]?
Man: Oh, okay, [crosstalk 01:04:31]
Man: [crosstalk 01:04:29] Rust in kernels [crosstalk 01:04:34]
Jack: Yeah, so, what you’re doing is you’re throwing it in boxes [crosstalk 01:04:37]…
Man: Yeah, I mean, I want it as [crosstalk 01:04:37]
Jack: …rather than returning an error [crosstalk 01:04:41].
Man: Yeah, a separate [crosstalk 01:04:40]
Jack: And it’s an uncaptured [crosstalk 01:04:41]
Man: [crosstalk 01:04:39]
Jack: Really? I think [crosstalk 01:04:45]
Greg: I am assuming that malloc will succeed.
Man: Is that the same for you two as well?
Jack: Yeah, we all assume that malloc will succeed, which is actually a problem. So we have this constantly running Parity node in our office.
And once every few weeks, because it’s quite a small box (it’s really just some puppet box we’ve got verifying), it will just come up with the Windows “parity.exe has stopped working” dialog, with all these memory allocation errors… It is a genuine thing that I feel Rust could deal with better. But they’re working on it. They are working on the allocator [inaudible 01:05:17].
Anatoly: This allocator library that we wrote for the compression work at Dropbox, it’s open source; you can Google for it. That basically solved that problem: we were able to run in constant time without any allocation and no system calls.
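The constant-time, no-syscall behavior Anatoly describes can be sketched as a bump allocator over a fixed arena. This is purely illustrative (it is not the Dropbox library); the `Bump` type and its methods are invented for the example, and `align` is assumed to be a nonzero power of two:

```rust
// A fixed-buffer bump allocator: O(1) allocation, no system calls
// after setup, and an explicit None when the arena is exhausted.
use std::cell::Cell;

struct Bump {
    buf: Box<[u8]>,    // the arena, allocated once up front
    next: Cell<usize>, // bump cursor (offset into buf)
}

impl Bump {
    fn with_capacity(cap: usize) -> Self {
        Bump { buf: vec![0u8; cap].into_boxed_slice(), next: Cell::new(0) }
    }

    /// Constant-time allocation: round the cursor up to `align`
    /// (assumed a nonzero power of two), bump it by `size`, and
    /// return the offset, or None when the arena is exhausted.
    fn alloc(&self, size: usize, align: usize) -> Option<usize> {
        let start = (self.next.get() + align - 1) & !(align - 1);
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None; // out of space: the caller decides what to do
        }
        self.next.set(end);
        Some(start)
    }
}

fn main() {
    let arena = Bump::with_capacity(64);
    assert_eq!(arena.alloc(16, 8), Some(0));
    assert_eq!(arena.alloc(16, 8), Some(16));
    // Requesting more than the remaining 32 bytes fails cleanly.
    assert_eq!(arena.alloc(64, 8), None);
}
```

Returning an offset (rather than a raw pointer) keeps the sketch safe-Rust; a production arena would hand out typed references and reset the cursor between phases.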
Greg: The thing I’m recalling in Lua, when you’re embedding it, is that the C API actually has a hook you can register so that if an allocation does fail, you can go outside of Lua and free up some memory [crosstalk 01:05:50]
Man: Yeah, yeah, yeah. At least you’re catching it rather than just [crosstalk 01:05:54] overkill.
Greg: Yeah, that’s kind of specific to a dynamically-typed programming language, and if you can kind of catch it and still do something useful outside of it, that… But I don’t know how that really [crosstalk 01:06:06]
Anatoly: [crosstalk 01:06:08] panic.
Man: Yeah. [crosstalk 01:06:09]
Man: You can do it. It’s open source.
Jack: If Parity aborts, we can have, like, a C++ [crosstalk 01:06:18] that we start.
Jack: I don’t know if this is like we’ve already gone too far, but, like, Zig actually handles this exact thing. It also handles stack overflows. Like, in a way, it works [crosstalk 01:06:27].
This is all, like, [crosstalk 01:06:28] research, don’t use Zig.
Man: Don’t use it.
Jack: But it’s like, it does handle, like, fallible allocations and [inaudible 01:06:38] stack overflows as well.
Man: So, if you’re doing this [crosstalk 01:06:42]
Man: Yes, but what I predict [crosstalk 01:07:09] is that basically, every single line in a single program can fail, but it won’t [crosstalk 01:06:49] so people will just start [crosstalk 01:06:51] process.
Jack: I don’t think that’s true. I don’t think that’s true. Like, you fail at allocating a frame. And, like, the way that you prevent, like, anything…like, any [crosstalk 01:07:04] can fail is it becomes, like, not Turing-incomplete, because you can still infinitely loop, but, like, you can’t infinitely allocate memory.
Like, there’s, like, a maximum amount of memory your program can ever allocate on the stack. That’s what [inaudible 01:07:49]. It still does have, like, panics and backtraces, but, like, it has a lot more tools to avoid them than Rust does.
Greg: Closing words, Sean?
Greg: You’re good, drink beer.
Together: [crosstalk 01:08:03]
Anatoly: Yeah. Thank you guys so much.