I found the quip pretty illuminating: the document they are sharing while having a conversation is not screensharing as we commonly understand it in modern parlance. That modern screensharing is in several ways worse than a demo that came out decades ago is frankly amazing.
I've tried to recapture a bit of that early magic with https://coderpad.io/ (multiple cursors, realtime editing/execution of code), but I can only imagine how jaw-dropping that demo must have been when it came out.
We're reinventing things over and over again because we haven't yet learned to design platforms that can evolve. Because the platforms can't evolve, every time we have a new platform we have to reinvent the applications of the old platforms.
The closest things we ever got to "platforms that can evolve" were, first, Unix-like systems, which were too low-level, limited and bound to specific machines to support the Engelbartian applications that preceded them (this is probably why those applications and their concepts have been mostly forgotten: there was a "platform gap" over which they could not be ported, so they had to wait for one more platform generation, and by then people had forgotten all about them). Then came the Java platform (which should have been either the Lisp or the Smalltalk platform if market forces hadn't screwed everything up), and now the Web platform. Every platform moves up the abstraction ladder and away from the hardware; the annoying side effect is that you have to wait for performance to catch up at every step, like waiting until the 90s to do in software what you could do in hardware in the 70s, then waiting again to do it on mobile devices, etc. The Web is the closest we ever got to having a "platform that can evolve", so we can gradually update the applications to evolve along with the platform, instead of porting them from one platform to another and losing important pieces of them in the process...
Right. However, the platform is irrelevant if the programming is abstract enough, since you can essentially regenerate all previous programs for a new platform. Lisp can do this. Actually, any language can probably do this, but Lisp is said to be better suited because of its (IIRC) introspective/reflective, code-is-data (i.e. ~typeless) purity.
The main shift for programmers is looking at code not as instructions (procedural) but as a combined cognitive/computational model, from which to assemble useful lines of thought (codepaths) on an ad-hoc/JIT basis.
It seems like the upshot is this: basically, code in Lisp. Use the code as a concrete, formalized model of abstract thought, don't enslave your cognitive capacities to attempting to model the computer's finite procedural logic.
Wish I had time to embark :) Maybe I'm getting there. Over 15 years I've noticed my procedural code has shrunk to tinier and tinier, more and more reusable and rigidly independent snippets. Perhaps the diving board for the great pool of Lisp draws closer faster than anticipated? :)
Well, you say that a flexible and high-level enough language can span all platforms and be itself a platform. The JVM is close to such a platform and Clojure is a good Lisp for it, so you "only" have to get the abstractions right... The thing is that "use the code as a concrete, formalized model of abstract thought" is kinda hard-to-impossible. Even in the most cross-platform language with the best meta-programming facilities, you'll end up with knowledge that (a) is not in the code (like everything about how Engelbart's prototypes interacted with humans to really augment their capabilities), which is not a problem in and of itself, but you also have (b) platform dependent knowledge and code that cannot be abstracted away, and (a) and (b) inevitably overlap, making a total mess, because the interaction/usage-related knowledge that's not in the code actually relates most to the platform-specific code that can't be abstracted well.
Anyway, my point is that platforms and programming languages are orthogonal: having the best cross-platform language in the world won't get you any closer to a "platform that can evolve well", and all programs will have platform-specific code that won't benefit from the language at all.
And a "platform that can evolve well" can manage to do so even with crappy languages as defaults. Our best such platform, the Web, has Javascript, a language that I imagine will keep evolving because it has the weird characteristic of "you can always duct-tape more stuff to it" (I imagine future versions of Javascript with optional advanced static typing, concurrency features à la Clojure, macros à la Lisp or Scala, etc.). A better language will not necessarily be "a better language for the platform", i.e. "something that will help the platform evolve better" (the "platform" I'm referring to here is HTML/Javascript/JSON/REST/HTTP/etc.); it will solve a different set of problems, and maybe it will or maybe it won't be adopted as the "platform's first-class language".
If you like Lisp, work to make Parenscript or ClojureScript better (I hope you pick the 2nd and let CL die and rot... it taught us a lot, but it's time to let it rest... there's no chance in hell it will ever gain any traction again), as this is the only way we'll have a decent Lisp for the Web.
> (b) platform dependent knowledge and code that cannot be abstracted away
Can you give an example of this platform-specific code that can't be abstracted well? I am less than convinced.
> a "platform that can evolve well" can manage to do so even with crappy languages as defaults. Our best such platform, the Web
Sorry, I don't think that the web is a well evolving platform. I think it's a butt ugly duct-tape driven just-still-hobbling-along type of platform that is rife with security holes, trust issues, unneeded complexity, cross platform issues, unexpected but effective centralization of power, and many other issues. I am not sure how you can hold it up as a great example of a technical offering, really.
> If you like Lisp, work to make Parenscript or ClojureScript better
ClojureScript seems pretty full-featured already. To be honest, though, I'm not sure that a Javascript VM is the place to implement complex code. GUI-related code is probably most of JS, and that is probably easier to generate from a model than to rewrite with an additional layer of syntax abstraction in an effectively new language... particularly given the rate at which new JS-related functionality appears.
> orthogonal
I am going to come clean: I hate it when people use this word because I'm never quite sure what they mean. Often, I think, neither are they. To me, what you said seems to be "x and y are not the same". Others use it differently. I hereby wish we could just use familiar, regular-human language instead of latter-day tech hubbub. Ta.
OK, it got well off-topic, but my prev comment was confusing indeed, so:
- you're right, that's what documentation is for, but people rarely document things like use cases and workflows and I've rarely seen comments like "this complex optimization is here because a delay longer than X ms is intolerable for workflows like ... or because otherwise you'll have inconsistent data when connectivity to external service Y breaks at the same time as service Z crashes ... and this actually happens very often because the user tends to do N and P at the same time and almost always ignores warning/error Q"
- "platform dependent knowledge and code that cannot be abstracted away": if you write a tablet app, you will need to take into consideration things like the touch UI, impermanent internet connectivity and so on, if you consider the "platform" to be the devices. At other levels, if your "platform" is the browsers, you'll have code that reacts to browser differences. If you use Ruby, your platform will be the Ruby interpreter and you'll also have code that is written the way it is because of platform limitations (GC problems, multithreading problems)
...I'm talking about the kinds of applications where 80% of the code is actually the UI code. If your balance differs, you can probably have good abstractions in your non-UI code. But for interface-heavy applications (it doesn't have to be UI, it can be interfaces with lots of external web APIs), porting will always be a nightmare and you'll lose stuff with every port and have to rediscover/reinvent it.
- "orthogonal" - my bad, I should have just said "they are independent" or "I don't think they really influence each other's evolution". I thought this word had already grown roots in the tech vocabulary, and it's a nice engineery metaphor that I really like (I just think of perpendicular vectors and independent dimensions, and since I'm more of a picture thinker who only later translates his thoughts into words, it just seems intuitive to me - http://en.wikipedia.org/wiki/Orthogonality)
- the Web: I agree, it's "butt ugly duct-tape driven" indeed, but that's the thing with things that evolve: you don't get to control the evolution, nobody really gets to do this, we all pull in different directions and some organic emergent behavior appears... hopefully it will crawl out of the current state to a less chaotic one that enables saner workflows for developers
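To make the platform-dependent-code point concrete, here is a hypothetical sketch in Javascript (all names are illustrative, not from any real framework) of the kind of branching that a tablet or browser app accumulates and that resists clean abstraction:

```javascript
// Hypothetical sketch: decisions forced by the platform, not the domain.
// `env` stands in for real feature detection ("ontouchstart" in window,
// navigator.onLine, user-agent checks, etc.).

function chooseInputStrategy(env) {
  // Touch UIs need larger targets and a different activation event.
  return env.hasTouch
    ? { event: "touchstart", minTargetPx: 44 }
    : { event: "click", minTargetPx: 24 };
}

function chooseSyncStrategy(env) {
  // Impermanent connectivity forces a queue-and-replay design, and that
  // decision leaks into the whole data layer; it can't be hidden
  // behind a neat cross-platform interface.
  return env.flakyNetwork
    ? { mode: "queue-and-replay", localStore: true }
    : { mode: "direct", localStore: false };
}

const tablet = { hasTouch: true, flakyNetwork: true };
const desktop = { hasTouch: false, flakyNetwork: false };
```

The point is that these branches are not incidental: the touch branch changes the UI layout and the connectivity branch changes the data model, so the "platform knowledge" ends up smeared across the whole codebase.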
I feel like there is a similar mismatch and almost backwards progress when you look at the concepts and tools behind Smalltalk versus the concepts most working programmers today reason with and are familiar with.
I think the huge focus on web development set CS back at least two decades in certain areas. The industry has been so intent on reinventing applications through the browser, it's lost ground in what can happen in areas like Engelbart's or Smalltalk or real hypertext.
I love Smalltalk, but it has some problems that are solved by the web browser.
Firstly, Smalltalk is a computing environment built around the concept of Personal Computing. You can open up and redefine the functionality of anything running on the system. This obviously makes it very prone to abuse. It was a system that was not built on the principles of running any random source code coming from a foreign party. The focus of the system was to elevate the concept of computing for the individual.
The web browser, on the other hand, has been built from the ground up on the principle that it would be running foreign and inherently untrustworthy code.
The web browser is approaching what Smalltalk was aiming for, however, with a focus on running code in a secure, sand-boxed environment. I don't feel like this is an inherently conscious trajectory but the fact that JavaScript has significant influences from Smalltalk-80 has probably helped to guide this path.
The advancements in networking and multimedia present in HTML5 are allowing this sandboxed and restricted environment to more fully express its influences from the Smalltalk systems.
Secondly, the open nature of the web browser and its standards have encouraged adoption, whereas the closed and proprietary nature of Smalltalk systems played a large role in its "failure".
Designing by committee and standardization has its pros and cons, but ultimately it promotes adoption and helps a system survive in the overall ecosystem of computing systems.
I don't blame the web for the death of such amazing computing platforms as Smalltalk and Lisp machines, I blame market mechanics.
Microsoft succeeded with its approach to Personal Computing because it focused on what the customers demanded for their short-term business goals, rather than some enlightened ethos of enabling humanity through better tooling. These corporate customers actually benefited from the opposite. Lack of control over a computing environment was seen as beneficial to the overall goals of the organization and is the antithesis of the goals of Smalltalk.
If you think of technological progress as a global simulated-annealing problem, it is OK to step back a bit to get out of a local maximum. Web dev may have taken us back 20-30 years in certain areas, but once it has caught up, it can take us much further than the previous methods ever could. The previous methods in this case being compiled software that requires specific devices/OSes and a very high level of competency in all aspects of development.
Web solutions also require specific devices and web browsers, really; I can't think of a single jaw-dropping JS demo around here that didn't meet with several complaints related to it simply not working under certain browsers (and a lot more complaints about performance). Aye, if you put enough time and hacks into it, you get it to run on pretty much any browser. Until the next update, that is.
> require a very high level of competency in all aspects of development
Crude tools, development methods and principles are a surrogate for simplicity, not an actual manifestation of it. Setting aside the fact that you sacrifice performance, code maintainability and, to a large extent, security, I also think the question of productivity remains open. I find it stunning that young web developers are ecstatic about how their tools let you get from idea to result so quickly, when they are pretty much on par with where Motif was in the mid-'90s. Not to mention the truckload of CSS hacks you need to make something that looks like a button (but without native looks) out of an anchor-that-really-shouldn't-be-a-button-but-it's-the-closest-thing-html-has-got. I don't think that can take us much further than compiled software could have taken us -- and slowly but surely (with stuff like asm.js), it looks like a lot of people are rediscovering that.
I think we both know that's not what I was referring to. It's stuff like this one (PhoneGap, but really, it's like this almost everywhere): <a href='#' class="button header-button header-button-left">Back</a>. Unless I am mistaken and you can build the whole UI out of form elements (perhaps you can, but I'm not sure I've actually seen that done anywhere), that's simply not sufficient.
I took your comment to mean that no button element exists, but I suppose that was not your intention; you were referring to links being abused as buttons.
I think the issue is simply that buttons tend to have more default styling you need to override, which is more likely to vary across platforms since the default style is generally chosen to mimic the system's native buttons. Whereas default styling for links is quite simple across all platforms. Besides that, there is no particular problem with using real button elements instead of link elements as far as I'm aware, just a matter of habit and convenience.
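For what it's worth, stripping that native chrome takes only a few declarations. Here is a minimal sketch (class name and colors are made up) of a real button element restyled, which avoids the anchor-as-button hack entirely:

```html
<!-- Minimal sketch: a real <button>, stripped of native chrome.
     The class name and colors are illustrative. -->
<style>
  .flat-button {
    appearance: none;          /* drop the platform's native button look */
    -webkit-appearance: none;
    border: none;
    background: #0366d6;
    color: #fff;
    font: inherit;             /* buttons don't inherit fonts by default */
    padding: 8px 16px;
    border-radius: 4px;
    cursor: pointer;
  }
</style>
<button type="button" class="flat-button">Back</button>
```

You lose the native look either way, but you keep the button's built-in keyboard and accessibility behavior, which the anchor version has to fake.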
We had to focus on the Web as a platform because all other platforms failed to evolve, they reached their "nirvana point" where most things just worked, but they just stopped there. The twisted and contorted fucked up mess that we call the Web, that shares more with the Unix philosophy of "worse is better", has somehow managed to keep growing and evolving and will soon leave everything else in the dust, even if it will never get close to any "it mostly just works" point and it will never be better at any particular thing than anything else... it will just be "better overall" and it will keep growing, and businesses love growth so they will keep betting on it.
I think we need to concentrate on steering the Web's growth in the right direction, instead of bitching about how much it set us back (and I agree, it did, but it's a "platform cost" that was/is worth paying if you want to "ride the wave" instead of sinking with your favorite ship and then swimming back to the surface every time a new Smalltalk ship shows up floating, then sinks, and then a new one comes, and so on... as an example, even choosing to develop a desktop app or a native iOS app instead of an HTML5/Javascript one with a minimal backend is basically "riding a sinking ship", even if you'll make a profit out of it and even if you'll deliver a better product to the customer).
I don't see that web platform developers chose to reinvent anything. I would put the blame on Microsoft and the other closed-system makers of the '80s and '90s, which prevented the development of cross-platform applications for open content production and consumption. Thus everything OSes can do (and did, even in the '60s and '70s) needs to be re-implemented in the browsers, by agreement. This too has taken a long time, for precisely the same reasons: companies waging war over platforms to lock users in, embracing and extending fledgling standards.
Web development tools are the best we can use these days if we want to reach the broadest audience, benefit from each other's experience and further enhance the platform. Other tools make us choose one walled garden where development is rosy and ideas old, all the while hoping it stays alive long enough. I hope the web can do what Engelbart's system did in 1968, but this time as open source: documented, findable and usable by every user with every device connected to the same truly global network.
I never had more than a passing interaction with a full blown Smalltalk-80 environment, but that was enough to make me look at e.g. any Java system and just feel sick to the stomach.
Shared documents in Google Drive work pretty well for this - multiple cursors, integrated chat (although text-only I believe). But yes, this is decades later; we should be light-years beyond this, but are instead only just rediscovering it.
I found this article by Bret enlightening. The expectation, though, that Engelbart's legacy could be communicated clearly by the New York Times in the obituary is too high.
The headline reads like this:
> Douglas C. Engelbart, Inventor of the Computer Mouse, Dies at 88
Imagine, then, if the headline read:
> Douglas C. Engelbart, Augmenter of Human Intellect, Dies at 88
The former is much clearer and more understandable, especially taking into consideration the audience of the New York Times. Bret's article has the luxury of expounding and explaining to an audience that is sympathetic to his values; I myself was ignorant of Engelbart's contributions as well as his personal ideology when it came to his life mission, and so greatly appreciate Bret's writing.
But, while the NYT article might seem untrue or ignorant to those who knew Engelbart and understood him, for most people, the appreciation for Engelbart comes out of the more 'mundane' things that were the side effects of his vision: the mouse, hypertext, etc. Most can appreciate the significance of these inventions, though the value of Engelbart's core works might escape them.
This is true; the NYT is not at fault here. Bret got his message about Engelbart out by attacking a target to show some contrast between perception and reality, which is a useful writing technique. A strawman, but a meaningful one.
This book, "What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry", is by the same guy who wrote the NYT obit with "the most facile interpretation of Engelbart" headline. It goes into far more depth about Engelbart's goals of augmenting human intellect.
I found this article to be very thought provoking with regards to Engelbart's work, but I can't help but read Bret's own worries in it. As if he himself fears being presented/remembered purely based on his inventions (Bret Victor, the inventor of the iPad interface) and not his goal (intends to invent the medium and representations in which the scientists, engineers, and artists of the next century will understand and create systems.)
As a side note: why can't we, the general public, appreciate people like Engelbart with the same fierceness before they are gone? I'm sure they'd like to know their life was incredibly meaningful.
> Easy-to-use computer systems, as we conventionally understand them, are not what Engelbart had in mind. You might be surprised to learn that he regards today’s one-size-fits-all GUI as a tragic outcome. That paradigm, he said in a talk at Accelerating Change 2004, has crippled our effort to augment human capability. High-performance tasks require high-performance user interfaces specially designed for those tasks. Instead of making every task lie on the Procrustean bed of the standard GUI, we should be inventing new, task-appropriate interfaces. No, they won’t work for everyone. Yes, they’ll require effort to learn. But in every domain there are some experts who will invest that effort in order to achieve greater mastery. We need to do more to empower those people.
The above cites Engelbart's 2004 talk "Large-Scale Collective IQ", so that is probably a good place to look as well.
> Alan Kay: ... If you have ever seen anybody use NLS [Engelbart's 1968 hypertext system for which he invented the mouse and chord key set] it is really marvelous 'cause you're kind of flying along through the stuff, several commands a second, and there's a completely different sense of what it means to interact than you have today. I characterize what we have today as a wonderful bike with training wheels on that nobody knows they are on, so nobody is trying to take them off. I just feel like we're way, way behind where we could have been if it weren't for the way commercialization turned out.
This is a good article, but it's important to understand that vision doesn't necessarily trump implementation. Steve Jobs also had a vision of augmenting human intellect (one person at a time rather than collectively), but he's remembered for the things he built in pursuit of the goal, not the goal. Ted Nelson's vision for hypertext is also more transcendent than what we got from the world-wide web, but on examination it turns out that his vision is flawed and unrealizable (in practice there is never a canonical reference for any text) while what we've actually built works.
The closest present-day analog to Engelbart's collaborative editing demo (edit: I mean specifically the document-editing portion, not the video etc.) is not screen-sharing, but modern collaborative editors (Google Docs, Etherpad), which do provide multiple cursors. Thus the screen-sharing analogy is a straw man, and the author hasn't really made his case against "drawing correspondences to our present-day systems" from Engelbart's visionary work. He may still be right, but I'd like to know why.
In Engelbart's original demo, the users can see each other, talk, collaboratively work on a document, and do together pretty much anything that the computer offers. The full OS, including the GUI, is intended to be multi-user.
Google Docs et al. only offer one aspect of that; all they aspire to do is solve the problem of "collaborative editing of a document". But say we need to work on something that involves listening to audio together, watching a video, or using a third-party program; then we're SOL with Google Docs.
What Engelbart's vision aspired to do was to allow people to work together through computers, no matter what the work was. Document editing is a microscopic facet of that.
> Google Docs et al. only offer one aspect of that; all they aspire to do is solve the problem of "collaborative editing of a document"
Yes, but it's just the aspect the OP criticizes screen-sharing (as a proxy for present-day technology) for missing. I take your point that these things could be better integrated, though.
I remember watching his NerdTV interview several years ago and being inspired by how principled and focused his work was. He wanted to apply his capacity toward solving interesting problems, and he did something about it.
As I began to read his thesis I was reminded powerfully of Conor McBride's Epigram system [1]. It's a very different kind of system, but the parity between a user's intent and the computer's ability to interact seamlessly with that intent is a powerful feature of Epigram.
[1] I can't seem to find the source anymore, but there are many papers like cs.ru.nl/~freek/courses/tt-2010/tvftl/epigram-notes.pdf
> "Douglas C. Engelbart, Inventor of the Computer Mouse, Dies at 88"
> This is as if you found the person who invented writing, and credited them for inventing the pencil.
That's what obituary headlines do: connect the dead person's achievements in a concrete way to readers' lives.
Real-time collaborative text editing based on operational transform, like Google Wave and Etherpad, is probably a better analogy to Engelbart's system than screen sharing.
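For those unfamiliar with operational transform, the core idea fits in a few lines. This is a deliberately minimal sketch in Javascript, handling only concurrent inserts (the function names are mine, not from Etherpad or Wave); real OT engines also handle deletes, operation composition, and server-side reconciliation:

```javascript
// Minimal OT sketch: transform one insert against a concurrent insert
// so that both users converge on the same document. Illustrative only.

// An operation inserts `text` at character position `pos`.
function transform(op, against) {
  // If the concurrent insert landed strictly before ours, our position
  // shifts right by its length. (Equal positions need a site-id
  // tie-break in a real system; omitted here.)
  if (against.pos < op.pos) {
    return { pos: op.pos + against.text.length, text: op.text };
  }
  return op;
}

function applyOp(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Two users edit "helo world" concurrently:
const base = "helo world";
const opA = { pos: 3, text: "l" };  // A fixes the typo: "hello world"
const opB = { pos: 10, text: "!" }; // B appends "!"

// Apply in either order, transforming the second op against the first:
const viaA = applyOp(applyOp(base, opA), transform(opB, opA));
const viaB = applyOp(applyOp(base, opB), transform(opA, opB));
// Both orders yield "hello world!"
```

The point is that each client can apply operations in a different order and still converge, which is what makes a shared multi-cursor document possible without locking.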