Tuesday, April 16, 2013

Markup is dead!

Recently I have been working on run-time adaptive widgets. Why? I've invested heavily in the Dojo Toolkit. The widget department of Dojo is all about singular, static widgets, comparable to what you'd build in JavaFX, for example. They are fine for building desktop-like GUIs but, you guessed it, terrible for the mobile web. Sometimes we want a horizontal menu to be displayed vertically. I searched the web for other solutions, but they all seem to be markup-based in some way. These are great times for HTML5 and CSS3; that's what the buzz is all about. But I don't like it all that much. Writing markup is tedious and not DRY, and the platforms that generate it are opaque and too specialized. I'd rather spend some time figuring out the best way to go about it. However, building a competitive tool from scratch has never been my forte. So I desperately cling to my current tool set (as we all do). I was going to set out creating stuff in Dojo, as I always do. Then this video came along:

http://fronteers.nl/congres/2011/sessions/web-components-and-model-driven-views-alex-russell

Suddenly – and any web developer watching it will feel the same – my javascript house of cards came tumbling down. This guy really has a point! We invested so much in javascript mainly because the HTML tool set is not complete, and probably never will be. We develop our apps in script (yes, my body tag is empty too) because "IE8 cannot do that", or because HTML is not semantically rich enough to declare my beautiful smart widget... Maybe if we could just extend the markup to declare that widget in an elegant, ambiguous way, so it can be observable for the rest of the world... But wait a minute. Isn't javascript capable of being semantic and observable? Is the current state of javascript not proof enough that something is indeed not right with markup too, and perhaps never has been? So perhaps you ask: what could possibly be wrong with markup? Well, my wiseguy snot-nose retort would be: do you really need it? Do you really need it when javascript is small enough, fast enough, malleable enough, semantic and observable enough? Do you need it when the choice would be up to you to either automate the web yourself or have it automated by some angle-bracket constructs? And your users or customers would be happy with your web creation either way? Ok, too many questions, time for some answers.

Obviously, there is a wish for the DOM to integrate more tightly with JS at the moment. And the same can be said for SVG. Perhaps our woes will all be over when this has come to pass. The reason I'm not happy about Dojo is that there is no DOM at all; that is to say, it is abstracted to the point of being unambiguous and unobservable, and of course that is very bad. I should not have to tell my app what goes where and what size it has at the pixel level, or tell my widget what to do when its horizontal orientation doesn't fit. I shouldn't have to express what is a Menu, what is a Tree, what is a ContentPane, when in markup this meaning could just remain implicit and become actualized through its use. Of course I used CamelCase to make clear that I am speaking of classes from object-oriented programming, which is not particularly blessed with implicitness... But to express this in a declarative way seems to me to be less controllable in the end. And even if there were a declarative technique to do this automatically, it wouldn't necessarily be the way I want it. So developer control is an issue here, and I think that has been the main reason for javascript to take off the way it has. Apart from the question "do I need it", there is the question "can I control it".
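
To make the point concrete, here is a minimal sketch of what I mean by a run-time adaptive widget. The names and numbers are hypothetical, not Dojo API: the widget decides its own orientation from the space it is given, instead of being told declaratively what to be.

```javascript
// A menu that picks its own orientation at run time, based on the
// available width and a rough estimate of the room its items need.
function createAdaptiveMenu(items) {
  return {
    items: items,
    orient: function (availableWidth, itemWidth) {
      var needed = this.items.length * itemWidth;
      return needed <= availableWidth ? "horizontal" : "vertical";
    }
  };
}

var menu = createAdaptiveMenu(["Home", "Blog", "About", "Contact"]);
console.log(menu.orient(800, 120)); // wide desktop viewport -> "horizontal"
console.log(menu.orient(320, 120)); // narrow mobile viewport -> "vertical"
```

No markup declares what this menu is or how it lays out; the meaning stays in script, where I can control it.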

In the video, one semantic example surprised me. It was the use of <input type="..." />, which we all know represents a form element. Now, having worked with forms a lot, I can certainly say that this is exactly the exceptional case where classes have their right to exist! The distinction between types of user input is about as well-defined as a UI concern can ever be, and in almost all cases we want to be as clear and concise as possible about what a user is expected to enter. Moreover, we want the input to reflect a very strict data model, so as to be valid. So why markup there? It's totally unambiguous, hence the domain of script. Ever needed a new form control or input type? I suspect you haven't been waiting for HTML5 for that, and you won't wait for it now if you like chosen.js, for example. But enough about this. I'm getting side-tracked.
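
To illustrate why input types feel like the domain of script: the mapping from a type to a strict validation rule is completely unambiguous, so it can live in plain javascript with no markup at all. A simplified sketch (the type names mirror HTML's, the validators are deliberately naive assumptions):

```javascript
// Map well-known input types to strict validators, the kind of
// data-model constraint an <input type="..."> implies.
var validators = {
  number: function (v) { return /^-?\d+(\.\d+)?$/.test(v); },
  email:  function (v) { return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v); },
  date:   function (v) { return !isNaN(Date.parse(v)); }
};

function isValid(type, value) {
  var check = validators[type];
  return check ? check(value) : true; // unknown types accept anything, like "text"
}

console.log(isValid("email", "tim@example.org")); // true
console.log(isValid("number", "abc"));            // false
```

And adding a new "input type" is just adding an entry to the map, no waiting for a spec.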

When we incorporate "new" techniques (like proxies, multimethods, protocols, CLOS-like metaprogramming and aspect-oriented programming on the one hand, and semantic-web-like relational data models on the other), it becomes clear that javascript will change and become more ambiguous and observable. I won't go into detail about those techniques, as they still need to be much more developed and normalized, and could take up multiple 600-page books to explain. But once they have been, and a decent way of controlling graphical components arbitrarily has emerged from the current endeavors, I see the role of markup, at least in the way it's currently interpreted by browsers, as largely played out. However, I remain open to a more in-depth discussion about its future.
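
A small taste of one of those techniques: a proxy (still experimental in most engines as I write this) can make a plain object observable without any markup or framework. A rough sketch, with hypothetical names:

```javascript
// Wrap a plain object so that every property change is reported to a
// listener. The object itself becomes observable; no markup involved.
function observable(target, onChange) {
  return new Proxy(target, {
    set: function (obj, prop, value) {
      var old = obj[prop];
      obj[prop] = value;
      onChange(prop, old, value);
      return true;
    }
  });
}

var changes = [];
var widget = observable({ orientation: "horizontal" }, function (prop, oldVal, newVal) {
  changes.push(prop + ": " + oldVal + " -> " + newVal);
});

widget.orientation = "vertical";
console.log(changes[0]); // "orientation: horizontal -> vertical"
```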

Sunday, April 7, 2013

Decentralized Data Decrapitude

Tim O'Reilly: "Given that you put the web into the public domain... Are you a socialist?" Tim Berners-Lee: "LOL!"
Opening of "A Conversation with Tim Berners-Lee", Web 2.0 Summit 2009.

Why does Facebook have approximately 1 billion users, Google+ 350 million and Diaspora, an open source alternative, only a meager 406,000? You could argue that Facebook started earlier and was at the time the better platform. But why was it the better platform? Because it used the infamous Social Graph. Supposedly developed in 2002 by Philippe Bouzaglou, one of the early Facebook guys at Harvard University, it found its way into the hands of the Zuckerberg cabal. For further facts, watch the movie ;-) However, it wasn't the first attempt at crossing the boundaries of the Web as just a "bucket of text and links" (and the occasional image). As we know, the Semantic Web was thought up by Tim Berners-Lee, the very founding father of the aforementioned bucket, and the first article about it was published in Scientific American in 2001. Of course, the Semantic Web was not yet Facebook technology in any way, but envisioned as a means not only to collect meaningful data, but to share it as well. These strategies, collection and sharing, always come paired, because when you start accumulating knowledge about persons, you can do the same for other entities, and vice versa. This kind of knowledge was what the Web then lacked: you may go to the library or a bookstore to get a book because you somehow know about it, but sometimes you borrow a book from a friend who thought you really ought to read it. Your friend knows about you and about the book, and that is why he or she recommended it to you. Interestingly, Bouzaglou now works on the collecting side of the coin, developing a semantic search engine, but unfortunately the demo currently throws an error.

In 2007 Tim Berners-Lee wanted to recoin the WWW as the GGG: the Giant Global Graph. FOAF (short for Friend of a Friend) was introduced as a decentralized format for describing persons and their relations. Actually, FOAF is not a format in itself, but an RDF ontology, a way of encoding human knowledge about a certain subject. RDF is an open standard, and in that way built upon the foundation of the WWW: a totally decentralized bucket of anything. But, for some reason, RDF failed, or rather, continues to fail, because it was never adopted by the likes of Facebook or Google. Facebook terms the public interface to its data the Open Graph, but that name is a bit of a hoax. It is just a "front door": behind this door is the actual internal structure, the real connection of all the data Facebook has. Given permission, a developer can get a tiny bit out and use it to create his or her own application. This may look like standalone social data, but it cannot live outside the Facebook realm without losing its meaning, even when converted to FOAF (which, after all, is possible to do). How can this be? This has to do with a fundamental (philosophical) problem, namely the Frame Problem. Data is only meaningful within a certain frame, and this of course also applies to the social graph. Before Facebook, people knew nothing of any "social graph", and only now that we have it can we denote it: Facebook became our frame for socially meaningful data, and despite its current decline in popularity, continues to be so. Google+ is "Google's Facebook", Diaspora is "an open source Facebook". FOAF will never become anything more than a way of serializing Facebook data. Berners-Lee, and we, the lesser gods, are merely considering the consequences of this reality after the fact...
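
For reference, this is roughly what a minimal FOAF description looks like in Turtle (the names and addresses are made up). It is exactly this kind of freestanding, decentralized triple data that, as argued above, loses its social meaning outside the frame that produced it:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<#me> a foaf:Person ;
    foaf:name "Alice Example" ;
    foaf:mbox <mailto:alice@example.org> ;
    foaf:knows [ a foaf:Person ; foaf:name "Bob Example" ] .
```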

One of the main concerns about this Facebook framing of the world is that "our" data is in the hands of a commercial, corporate entity. Of course, Facebook can easily state that "You own all of the content and information you post on Facebook", since ownership in the sense of copyright is not a moneymaker for them. Furthermore, the legal terms state that "you can control how it is shared through your privacy and application settings", which underlines that control lies with you, but only as far as the Facebook implementation goes. The real money is in the fact that only Facebook knows everything, and will allow third parties to commercially exploit a portion of this knowledge. Facebook has monopolized social data for the last couple of years, only to be competed with on the same model, as found in Google+. This very much resembles the way large corporations do business on the whole: stock trading, driving up prices, offshoring, and more of that modern-day imperialism. As an aside, Google+ started out with a system called OpenSocial, which may or may not be a hoax like the one we encountered in Facebook, but the system was abandoned by Google last year. As far as I can tell it continues to fuel MySpace, but who cares anyway... The major players now are Facebook and Google, and while their popularity is evening out, they drive the same strategy: data slavery. How do we counter this epidemic? Berners-Lee stated that "I express my network in a FOAF file, and that is a start of the revolution", but in hindsight that seems rather misguided. It wasn't the technology in the first place that caused a revolution; it was the concept in the hands of a bunch of Harvard misfits.

What concept do we have to strike back with, if all we can come up with is that "it has to be open"? Not much. A dubious initiative operating givememydata.com wants you to get your data out of Facebook, and offers some formats, including a GraphViz file, that will at least allow you to display and explore your part of the network. Apart from its leftist motto it resembles a third-party application in every way, including the "U.S. commercial" extension to its domain. And again, what to do with your data when it's no longer "in the grid"? Port it to Diaspora? Well, it doesn't necessarily mean the same thing there, so you might have some work to do, provided you know what you're working with. Just sling your FOAF on the web, like TBL proposes you do? That means exposing your complete shopping profile to all kinds of potential harm, possibly worse than Facebook (in the short term at least). I don't know the answer yet, but I think it will take a lot more common knowledge about what social data is and what power it harbors. It will take a system of trust and authorization that is far more fine-grained than anything available, and that can be used by laymen. But the most important thing is that people need to be a little more responsible in their interaction with the web. So for now, "open" and "social" will have to become "constrained" and "responsible", and that does sound a lot more boring than "#ifihadglass I'd share the world with my almost million followers"...

Update 2013/04/08: I posted the 2010 Web 2.0 conference interview with Mark Zuckerberg in full. For the sake of completeness.

Update 2013/04/16: Shutting down the Open Knowledge Graph