Hacker News
Narrowing the notion of a runtime in Rust (github.com/rust-lang)
208 points by heinrich5991 on Dec 19, 2014 | hide | past | favorite | 102 comments


They're getting rid of trying to do their own CPU dispatching. (Some people call this "lightweight threads" or "green threads"; Go's "goroutines" work that way.) That simplifies things considerably.

There's a compile-time optimization which does many of the things people want to do with such threads. If you have two tasks communicating over a channel or queue, only one task writes to the queue, only one task reads from the queue, and the maximum length of the queue is 1, the code can be compiled as a coroutine. Go may get that optimization at some point; it's been discussed.

(Things that have been learned about Go goroutines: 1) the above case is moderately common, 2) for "select", the most common case is N=1 alternatives, followed by N=2. N>2 is rare. Go has special case code for N=1, and the case for N>1 is very general and does a lot of dynamic allocation. A special case for N=2, which in practice is usually read-or-timeout/abort, would be useful.)


Support for green threads (and hence any manual CPU dispatching) was actually removed a month ago[18967], along with much of the generic infrastructure for mediating between 1:1 and M:N threading. The patch linked here just completes the removal of that infrastructure and switches things to be much closer to the raw OS in implementation and semantics.

[18967]: https://github.com/rust-lang/rust/pull/18967


What impact will big changes like these have on an eventual stable release of Rust? My team has been reading about Rust for a long time now, and we've tried it out, but we just can't deal with the constant change. We need a language that offers what Rust can offer, but we also need our code to still work without changes a few months after we write it. Earlier this year we were hearing that a stable release of Rust would be out before the end of 2014. But that clearly isn't going to happen. The latest I read was that it's now supposed to be ready sometime in March 2015 at the earliest. I understand that software schedules slip, and that it's important to get things done right the first time around whenever possible, but getting out a stable release of Rust just seems to be dragging on and on! I really, really hope that Rust 1 is available by this upcoming March, but if history is any indication, that may be unlikely to happen. Big changes like these don't bring me much comfort at all.


Most of this work is occurring precisely so it can be made stable. If you are curious about the Rust 1.0 cycle, see http://blog.rust-lang.org/2014/12/12/1.0-Timeline.html

The main reason that it's taking a while is that they want to get it right. There are a number of breaking changes in the pipeline that they want to get finalised, to improve the language. After that, they still have plenty of features in mind for post 1.0, but these will be added so as not to break 1.0 code.


We are planning to have the 1.0-alpha around the new year, and then have at least two 6-week release cycles (hopefully no more) to really get everything stable. See http://blog.rust-lang.org/2014/12/12/1.0-Timeline.html for more info.


This looks to me to be an important and very promising change. I don't know how much they've had to compromise, but making this easily callable from C (and hence, easily callable from other languages with C interop) makes it simple to embed Rust in other languages and applications.

For example, I'd been looking at doing Rust development for Android; currently native (i.e. C/C++ apps) on Android get run through a Java wrapper; starting a Rust process with the Rust runtime active is incredibly complex. With this change, it should be trivial to have a basic Android activity that just calls a Rust main() and have everything go from there.


Does anyone know if this makes using Rust code on an Arduino easier? There's [https://github.com/jensnockert/dueboot] which only works on the Due, not the Uno.


I think the main blocking part there is getting LLVM working on AVR. I believe there is a fork of LLVM that does (https://github.com/sushihangover/llvm-avr), but you would have to merge that fork with the LLVM that Rust uses in order to run Rust on it, and it looks like the fork is a little out of date so it may take some work.

This particular change likely doesn't affect running Rust on an Arduino at all, as what it is doing is removing the runtime for std, which contains many of the OS abstractions. You don't have an OS on an Arduino, however, so std isn't particularly helpful there. Several months ago they already did what was necessary for that, factoring out much of the completely OS-independent functionality into a "core" library (http://doc.rust-lang.org/core/), so you can opt out of using std and use "core" instead, plus add a little bit of platform-specific glue, to run Rust on bare metal.


Too bad, I was hoping for easier embedded usage as well. I don't particularly like the Arduino abstractions, too much is hidden, and to get decent performance or ADC precision often requires dropping to low-level C or even assembler.

The last commit to that fork was just 10 months ago, though... might be tempted to have a look during the holidays.


Yeah, as said above, check out zinc.rs. It runs on ARM MCUs rather than AVRs, but AVRs are becoming legacy anyway.


Aren't AVR and ARM different power envelopes, though? I thought ARM was for small computers and AVR is for things that want to be slightly more complicated than a gearbox.


Cortex-A series is for small computers (eg smartphones etc), Cortex-M is more of a contender for AVR, especially Cortex-M0, which are the smallest ARM cores. There is currently a lot of activity around ARM and power optimization in the industry, while AVRs are generally a bit older as a design.

Just for fun, I picked a semi-random example of an ARM MCU from Digikey: EFM32ZG110F32[1]. Based on the datasheet it consumes about 8mW when running at its full 24MHz speed. Meanwhile the AVR chip used in eg. the Arduino Uno[2] consumes about 10mW when running at 8MHz. Of course at these ultra-low power levels there is quite a lot of variability depending on exactly what you are doing and how well your code is written.

[1] http://www.digikey.com/product-detail/en/EFM32ZG110F32-QFN24...

[2] Atmega 328p http://www.atmel.com/devices/atmega328p.aspx


There's a wide range of ARM processors, but you can maybe sort them into three tiers: small AVR-like scale (e.g. Cortex-M), smartphone speed (e.g. Cortex-A8/9, what most people come into contact with), and, edging its way into the market, the server-targeted ARMs.

The small, deeply embedded scale usually runs bare-metal or a small RTOS, while the next tier up often runs Linux. The small-scale ARMs are basically on the fast side (32-bit processors) of the very low-end embedded CPU class, but they're often selected for everything but the most cost- or power-sensitive applications, which tend to go for very small, power-sipping 8-bit CPUs (e.g. MSP430s).


I love it. The day C and C++ die I'll be dancing on their grave and twerking on the tombstone.

Will be hard to find an ARM equivalent for the ultra low power MSPs though.


The new Atmel D21 ARM looks interesting: 40 µA/MHz, and 0.9 µA standby with RAM retention and RTC.


With rust-bindgen, you should be able to use a vendor-supplied HAL written in C quite easily.


Not what you want, but for safe systems programming on the Arduino one can in the meantime use

Ada, http://sourceforge.net/p/avr-ada/wiki/Home/

Pascal, http://www.mikroe.com/mikropascal/avr/


It's a less general language, but the Ivory language applies, as well. Ivory is a Haskell library that generates C code, so it can piggyback on Haskell's type checker to prove safety but still target a multitude of embedded systems by virtue of producing plain C.

http://ivorylang.org/ivory-introduction.html


Still limited by lack of an LLVM backend for AVR, doesn't make it any easier or harder.


Atmel really should sponsor AVR support for LLVM.


Totally unlikely. They are more likely to deprecate them in favour of Cortex-M's.


I see your point.

The cheapest "large memory" part on Mouser for the ARM arch with an A/D is a Freescale MKL02Z8VFG4 (8K flash / 4K RAM, 12-bit A/D) at $1.12 qty 1, just above 50 cents at qty 500.


The chips Atmel is trying to move are not AVRs; it sounds like a money sink to push a Harvard architecture in a von Neumann world.


ARM Cortex chips are better than Atmel's AVRs in virtually every way. The only sometimes-disadvantage is that they are 3.3V rather than 5V (though that is often an advantage these days).

So I doubt many people will be motivated to work on this.


The dueboot project is rather dead, checkout zinc.rs :)


While it's true that this is a pretty big win, both were already supported. The second production deployment of Rust is a Ruby gem, written in Rust, and we test each commit against Android.




I'm more surprised to find Lovecraft quotes in Rust's source code:

https://github.com/rust-lang/rust/blob/6bdce25e155d846bb9252...


And in your Rust binaries, though this is a bug long since acknowledged :) https://github.com/rust-lang/rust/issues/13871

(And one of those isn't a Lovecraft quote, it's a Majora's Mask quote.)


This is going to make Rust so much easier for me to use in serious projects. I have a ton of stuff in C++ that I can't realistically just stop using. I'd love to be able to start introducing Rust into those code bases, however, and this will make things much less of a hassle.


Ok, I don't know rust at all, but often when I'm trying to understand a change, I look at how it affects the test cases.

Seeing the diff in the test cases should show how the API has changed, but I'm having a hard time reconciling the description of the changes with how the tests have changed.

There are several occurrences of changes like this:

    - spawn(move|| {
    + Thread::spawn(move|| {
         tx.send(i);
    - })
    + }).detach()
Thread methods have been namespaced under "Thread::", and a "detach" method is now appended to each call to spawn.

Can anyone shed some light?


In ancient Rust, the standard library attempted to provide an abstraction over the underlying threading model. The purpose of this abstraction, called a "task", was to allow libraries to make use of the standard concurrency constructs while allowing the consumers of those libraries to determine whether or not to use 1:1 threading (the typical platform thread model) or M:N threading (green threads), and it would all Just Work with no effort required.

This was quite an exciting idea at the time. However, in practice, it just didn't pan out: the compromises required to paper over the differences between 1:1 threading and M:N threading managed to almost entirely obliterate the respective advantages of both models.

As a result, Rust is in the process of removing the "tasks" abstraction and replacing it with a bog-standard 1:1 threading model (though it still benefits from all the typical memory-safe and concurrency-safe abstractions that Rust otherwise provides). All users of concurrency in the stdlib will now be using native threads, and it will be left to outside implementors to experiment with green threading libraries (see https://github.com/carllerche/mio for one example of such).

One of the consequences of this change is that the `std::task` module has been replaced with `std::thread`. Formerly, every Rust program imported the function `std::task::spawn` to use for spawning tasks. With the removal of `std::task`, the newly analogous function from `std::thread` has not been added to the prelude, and is therefore not imported by default in every Rust program. It could be added if such a thing were desired (you can see remarks to this effect in the comments on the PR itself (https://github.com/rust-lang/rust/pull/19654#issuecomment-66... and https://github.com/rust-lang/rust/pull/19654#issuecomment-66... )).


> This was quite an exciting idea at the time. However, in practice, it just didn't pan out: the compromises required to paper over the differences between 1:1 threading and M:N threading managed to almost entirely obliterate the respective advantages of both models.

I would love to read a longer article about this. M:N has quite an allure, but it never seems to reach mainstream. Erlang and Go provide it and they seem to be happy. Both languages do not focus on performance, though.


I can give you the short article:

M:N is great for maximising throughput, at the expense of fair scheduling. If you want any kind of fair scheduling, make sure you 'yield' often. Hope you're using a language that helps you with this.

If you want to avoid deadlock, don't get stuck in any infinite loops and avoid OS thread synchronisation, e.g. in third-party libraries. Oh, and if you're running on a NUMA architecture, make sure you resume tasks on the same thread that suspended them, unless you like needlessly consuming your processor interconnect's bandwidth.

On the plus side, it's a lot easier to write concurrent algorithms with green threads than when you have to farm work out to thread pools. It's even easier if your language was designed for it and can help you avoid the common pitfalls above. But if you can use a thread pool easily, you probably shouldn't be looking at M:N threading.


Java tried green threads from day one in 1995; it never worked out well and was dropped a long time ago (around 2000). Go did a good job with its goroutines, at a certain cost: embedding a Go library is much harder than embedding a C library, so a Go application is typically a standalone static binary.

It is really great to see Rust drop the mixture of green and native threads and reduce the runtime to a minimum. It will make Rust stand out in many environments where no other language (besides C) would work well.


So now, will an "unhandled" panic on a thread abort the process? Or just get silently ignored until another thread stumbles upon a call that's poisoned?


Your RFC is still under discussion, and isn't implemented.


What a disappointment.


I like it. Platform threads should be lightweight enough that you don't have to patch anything on top of them.

And if you really need M:N behaviour, it is still available as a library.


Agreed. Defaulting to native threading gives developers so much more freedom. After all, M:N threading is just implemented on native threads anyway. You can still do that if you want to. Rust isn't going to be a systems language used only for networking, so forcing a M:N threading model on it doesn't make sense.

This change was necessary to keep Rust as a general use systems programming language.


> Rust isn't going to be a systems language used only for networking, so forcing a M:N threading model on it doesn't make sense.

M:N threading decouples degree of concurrency from degree of parallelism, which is not desirable only for networking. And the old way wasn't about forcing M:N threading, it was about providing a common interface for concurrency independent of the relation of concurrency to parallelism in the backend implementation.

I can understand the idea that that abstraction may have been too expensive, but I can't get behind the idea that 1:1 threading as the only thing supported in the standard library somehow provides more freedom for developers.


Short-term it may look like there's less freedom since it's harder to use M:N threading (and 1:1 and M:N threading don't necessarily work perfectly together), but remember that the old choice was between 1:1 threading with overhead and M:N threading with overhead.

Long term, we can have libraries that provide M:N threading with minimal overhead; and also libraries that provide 1:1 & M:N together like the old std (with the associated overhead).

That is, the programmer is getting more flexibility, just not right now (especially with Cargo, which makes it particularly easy to use external libraries). The team is trying to take a longer term view focusing std on safe interfaces to the OS primitives, which can then be used to build fancier/more abstract libraries more easily.


It's not like people are saying "Rust will never have M:N threading", it's that the current implementation didn't offer enough advantages for the disadvantages it brought. https://github.com/aturon/rfcs/blob/remove-runtime/active/00...

Having done some things with libgreen threads myself, they really didn't give anything over proper threads (especially without segmented stacks). So I am curious how this will evolve in the future.


The two different threading models have different costs/benefits. Native threads are much faster at context switching than green threads; however, native threads have a significant startup cost, whereas spawning green threads is nearly cost-free.


Actually, green threads are much faster at context switching than native threads. They also consume less heap space for their (growable) stacks.

Native threads are natively supported by the OS, so managing them requires calls to the OS, the main reason why it's slow. However, it comes with a few advantages. The main advantage of native threads is that they are preemptive - they can be scheduled based on ticks (i.e. number of microseconds spent running the thread), even if they are in the middle of a loop. Also, they can be run on different CPUs, the OS takes care of all CPU flags/registers/etc., they are actual OS structures, ...


Do other OSes have an equivalent to User Mode Scheduling? It seems to allow you to get the best of both worlds, if your tasks are amenable to the model.

http://msdn.microsoft.com/en-us/library/windows/desktop/dd62...


Google supposedly has implemented something similar called user-level threads, which they talked about at the 2013 Linux Plumbers Conference. If I recall the video correctly, they talk about plans to open source it, but as far as I know there hasn't been much movement in that space yet:

http://www.linuxplumbersconf.org/2013/ocw/proposals/1653 https://www.youtube.com/watch?v=KXuZi9aeGTw


>Native threads are much faster at context switching then green threads

No, native threads have a much higher context switch overhead. The downside of green threads is the lack of parallelism, you can only use one CPU core. This is why decent languages multiplex green threads over OS threads (M:N). The mistake rust made was trying to make that distinction transparent.


Green threads also suck for other reasons. Async file I/O doesn't really work properly on Linux or OS X, so you have to use a thread pool for that, and you can't casually use any other library you'd otherwise want to use, either.


That's why a decent language multiplexes green threads over native threads so you can have your cake and eat it too. You still need the ability to choose to create a green thread or a native thread though, it is the idea of trying to make the two interchangeable that was a problem.


Quote from Wikipedia article on Green Threads: "Linux native threads have slightly better performance on I/O and context switching operations."

Some of the actual context switch in native threads is performed by the CPU, whereas with green threads it is not. On top of this, the whole stack is copied upon a context switch.


Wikipedia is wrong and generally speaking you shouldn't trust it on CS matters. Or any other subject, really. Green thread context switching is much faster than native thread switching, as long as you don't do it badly. For example, on Linux, don't use setcontext/swapcontext/etc, because they do a system call to change the signal mask.


The paper[1] cited for that quote is specifically about PersonalJava 1.1.3 on Linux 2.2 on PowerPC. It may not be relevant to implementations from this century.

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.8.9...


Your quote lacks the context which makes it clear that it is not a general claim.

>Some of the actual context switch in native threads is performed by the CPU, where as with green threads it is not

Exactly. Native threads have a context switch, green threads don't.


Which languages multiplex green threads over OS threads in a non-transparent fashion, exactly?


Can you elaborate?


Fwiw I think we were all disappointed when all that work and attention on a promising mechanism for having it both ways ultimately came to nothing.


I concur. But I don't regret the time spent experimenting. Even negative results are valuable for advancing the state of the art.


Yeah, I'm just trying to describe the above poster as reasonable before other readers jump on the opportunity to white-out the comment for no good reason.


There is not much to say which has not already been said. I think it is sad that so much effort is going into projects which do not advance the state of the art. First Go, now Rust — these high-profile projects draw people’s attention away from actually innovative work.

As a core contributor, you will retort that Rust has different priorities than, say, Erlang, which is built around the concept of providing N:M processes (green threads) as a language primitive. My answer to that is, why not improve on Erlang, instead of settling for improving on C?


It's understandable that you think that I'm a core contributor, but I'm really just an informed community member. I have no vested stake in Rust, and I'm not afraid to criticize the core team when they make baffling decisions (example: the tone-deaf new syntax for fixed-length arrays).

And just because Rust has a focus on pragmatism doesn't mean that it's not blazing new territory. Neither Ada nor Cyclone can boast such extensively useful statically-safe references. And figuring out how to do unboxed closures memory-safely in a language without a garbage collector is, as far as I can tell, entirely novel (though you'd have to ask a PL researcher to be certain). I expect aspects of Rust to show up in all future systems programming languages, including future revisions of C++ (as well as brand-new efforts such as M#).


We already have a systems programming language with a garbage collector.

http://dl.acm.org/citation.cfm?id=367177.367199


It is unfortunate that you think Rust is not improving the state of the art: no other language is low-level and memory safe, with the same flexibility as Rust. Rust is certainly pushing the research envelope further than Go, and, anyway, it seems to me that improving on C is something very desirable given the recent spate of horrible bugs in C libraries, caused by memory safety violations (heartbleed etc).


Because we actually have to release something. And you can't be all things to all people. The best we can do is try to create a powerful, safe, extensible, low-overhead language with unsafe escape hatches. The hope is that this will allow for future advances in lightweight, task-based concurrency in the form of libraries, as opposed to as baked-in language features.


The description notes the replacement of many uses of "tasks" with native OS threads. Tasks may map to threads or may be managed as green threads.

Threads, unlike tasks, must be detached if they are to be allowed to outlive their parent.


Does removing the runtime have any effect on the current effort to get Rust working on Emscripten?


Very simple and stripped-down Rust programs have already been demonstrated to work via Emscripten. I expect the work here would make it a little bit easier, but I don't know of anyone actively working on supporting this use case.

Last I heard, more advanced support hinged on Emscripten upgrading their version of LLVM, which was much more ancient than the bleeding-edge version that Rust uses.


I was wondering that myself. I remember it being given as the main blocker when someone asked about Emscripten, but that was over a year ago and a lot has changed since.


I'm not a low level guy, but this means we can have rust libs callable from ruby gems through C interop?

Isn't this fucking awesome?


Not only is this already possible, but the largest production deployment of Rust does exactly this (www.skylight.io).

Here's a dummy repo from steveklabnik to demonstrate the process: https://github.com/steveklabnik/rust_example (though the change in the OP is so new that I doubt this repo has been updated yet)


I compiled a Rust gem yesterday, ran `make`, and it worked, sooo...


You can already do this. I think this will just help make it easier and even more lightweight. I'd love to know the exact details, though.


Is this going to be part of the 1.0 release schedule, then? It seems ridiculously breaking when they're only 20 days from stable!

http://blog.rust-lang.org/2014/12/12/1.0-Timeline.html


Don't expect the January 9th alpha to signal the end of breaking changes. It merely represents a significant raising of the bar for breaking changes to be accepted. The subsequent betas will raise the bar even further.

In fact, as we approach the alpha, I expect several frantic last-minute breaking changes as people attempt to beat the buzzer.


Why did you decide to use hard deadline project management? It seems like Rust's ideals of correctness and fixing fundamental C/C++ flaws are more aligned with a "when it's done" release. On the other hand, the language needs a stable release to pick up many users. I'm guessing the latter was your motivator. And knowing that real world users will likely find problems with your design no matter how long you spent perfecting it. Just curious what went into the decision.


I didn't decide the deadline, I ain't a Rust dev. :P If anything I would have wanted the deadline to be January 20th, to mark the three-year anniversary of Rust 0.1.

But in an attempt to answer your question, I see Rust development as an application of simulated annealing (https://en.wikipedia.org/wiki/Simulated_annealing). In all honesty, we could probably spend another two years toying around and seeking a "more optimal" solution... but I personally agree with the devs that we have reached a point where the value of seeking backwards-incompatible solutions no longer justifies the time spent unable to cultivate a growing stable of third-party libraries.

That said, Rust won't be standing still post-1.0. Not even close. There are lots of language improvements that will be arriving backwards-compatibly in the coming year (e.g. improvements to dynamically-sized types, improvements to closure type inference, various improvements to the trait system like negative bounds, and so on.). Longer-term, I expect Rust to look into a design for higher-kinded types, which could eventually allow for generic monads in the vein of Haskell (though this should be considered a very far-future goal, perhaps even worthy of a Rust 2.0).


I think it's worth noting that 1.0 is a stable release, meaning that any 1.x future release will not introduce breaking changes. But that does not preclude them from eventually releasing a 2.0 which does break compatibility with 1.0.

That allows them to tell people that, yes, they can depend on the fact that the language and API will not change from, say, 1.2 to 1.3. And that they will continue to support it for the foreseeable future. But I imagine that eventually they will want to make breaking changes, as they certainly won't get everything completely right with 1.0. But that will be clearly marked as an entirely new version.


Except the std lib isn't going stable for 1.0. They have a system that marks which APIs are stable, but a lot of stuff is still unstable/experimental. So it's very possible to write a Rust 1.0 program that won't work on 1.x, if you used those parts of the stdlib. Fortunately, the compiler can warn you of this so you know what you're getting into.


The plan is to have those unstable parts disabled in the stable compiler, so that any program that works with rustc 1.0 will work with rustc 1.x etc. One can use those parts via the nightly branch. The plan is also to try to have much of the most useful parts of the stdlib stable (at least the core functionality).

More info:

- https://github.com/rust-lang/rfcs/pull/507

- http://blog.rust-lang.org/2014/12/12/1.0-Timeline.html


> But that does not preclude them from eventually releasing a 2.0 which does break compatibility with 1.0.

Which lets us relive the fun times of eg. Dlang 2.0 release or Py3k


I can't think of a popular language which has not had breaking releases. C, C++ and Java all have.


The 1.0 stable release will be pushed back as far as is deemed necessary. Ideally it will be in 14 weeks, but if there's still critical work to be done when we hit that point, we'll wait another 6 weeks. And another. And another. We're not just going to push out whatever we have.


This is becoming very problematic for those of us who want to use Rust, but who aren't in a position to be constantly updating code we wrote last week so that it works with whatever changes were made to Rust this week. We really, really, really NEED a stable release of Rust. Earlier in 2014 we were hearing that we'd have it by the end of 2014. Earlier this month we were hearing that we'd have it by the end of March 2015. Now we hear it may be much later than that. It's not at all encouraging!


The team is 100% focused on getting 1.0 out; I've been in the meetings, I've been sitting next to other core devs: they are putting their heart and soul into making sure 1.0 is soon and is high-quality.

Pushing it out before we think it is ready, just because some people want to use it, will result in stabilising a suboptimal language, trading off the long-term experience of the next 5 years (or 5 decades?) for some short-term gratification.


And end up with bad design choices for the sake of speed? No thanks. I'd rather wait as long as needed to perfect the design. Rust won't succeed unless its benefits over C++ are dramatic, obvious, and uncontroversial. I'm fine using C++ for a few more years if it means Rust will be amazing and not just OK.


The date for the alpha is hard, but the final release of 1.0 is not. That seems like a good compromise.


It's 20 days until the 1.0 alpha -- from the timeline in that blog post, stable would be March 30 at the earliest.


20 days away from the 1.0.0-alpha; it'll be at least 12 more weeks before the actual 1.0.0 release.


So... you want that change to land AFTER going stable, or to not happen at all?


The merge may have occurred only just now, but the actual pull request and first draft of this change are several weeks old.


Now to remove unwinding and make it optional, or make fully recoverable exceptions... This seems like the biggest controversy in Rust, and it sounds like the plan is to just make unwinding optional, though last I read, that means two incompatible stdlibs and a need to compile everything in two different modes. Still, it'd be really nice to not have unwinding when your program doesn't need it, but also be able to make apps that benefit from exceptions, like web servers.


Isn't unwinding a standard part of C/C++ ABIs these days? I would have thought that disabling it would cause more compatibility problems than supporting it.


Is it possible to have a threading model without an OS kernel? In other words, is it possible to package the Rust runtime with an embedded executable and run threaded software without the OS? Or does threading inherently require the OS to do the thread scheduling, etc.?


Plan 9's libthread [1] implements cooperative threading entirely in a userspace library. This is separate from Plan 9's fork() [2] syscall, which allows both forking processes and creating kernel-level threads.

Thanks to libthread being orthogonal to processes, it turned out to be easily portable.

[1] http://man.cat-v.org/plan_9/2/thread

[2] http://man.cat-v.org/plan_9/2/fork -- this is what Linux's clone(2) syscall was modeled after.


It seems to me like the designers really couldn't make up their minds.

Do you want GC or not? Do you want lightweight threads or not? What level of runtime requirements do you want?

It's finally converging on something sane, but it's taken a while.


Or, to put it another way, the design process happened in the open, and you got to see some perfectly normal things that just wouldn't usually be visible to you. (And even have some input into them, if you were so inclined.) I don't really think that deserves criticism.


To be fair, GC was always a big TODO item that became less and less important until there was no good reason to implement it in the language core (as you can implement it on top of the language), and it was removed from the list.

The other two are really two sides of the same coin. Having green threads involves having a runtime that has to be started. Deciding to remove the runtime means removing the green threads.


That's what always happens. You have a few goals that you won't compromise on (memory safety and same performance as C) and then you try to make decisions about things you CAN compromise on.


I've thought similar things when I've looked at Rust in the past. It seems to me that designers really wanted to make it nice from a PL theory perspective, (hence the focus on functional programming and a strong type system). I'm more partial to the Go style of language design, which focused on making it easy to do the things the designers did in practice.

I don't want to make it seem like I'm coming down too hard on Rust however, and I'm excited to see how everything works out once version 1.0 is released.


For the time I've been interested in rust, it's been quite focused on a very practical problem: safe, without requiring a GC.


> I've thought similar things when I've looked at Rust in the past. It seems to me that designers really wanted to make it nice from a PL theory perspective, (hence the focus on functional programming and a strong type system).

Do you have any alternatives to implementing a strong type system in order to give certain well-defined safety guarantees for anyone using the language? Any other alternative theories or implementation strategies that would be better?

> I'm more partial to the Go style of language design, which focused on making it easy to do the things the designers did in practice.

The designers/implementers wrote the first non-bootstrapped compiler in OCaml, a functional language. How's that for "things the designers did in practice"?

As far as doing what they were doing in systems programming in practice: the point of the project is to actually get away from their then heavy use of unsafe languages like C++. But then they took the concepts of smart pointers, ownership etc. (presumably inspired by best practices in C/++) and tried to make it bulletproof to use.



