Tuesday, September 3, 2013

A Real Deal Breaker

Internet Explorer has done quite well since version 9 at keeping up with evolving web standards. However, quite a lot is still missing. One of the saddest gaps is the omission of preserve-3d (the transform-style value). This basically means the browser doesn't let you create three-dimensional objects that can be manipulated as such. Cool things like native 3D or parallax scrolling are out of the question with IE10.

So why isn't this supported (yet)? Are Microsoft developers just inept? Reluctant? Lazy? /me wonders...

To shield us from further disappointment, it's probably for the best to NEVER AGAIN develop for Internet Explorer. This ends here.


Thursday, August 8, 2013

The Bling Chain

jQuery is the most popular JavaScript toolkit. But I think it's inherently flawed, and there are much better alternatives.

jQuery is a toolkit for manipulating the Document Object Model (DOM), i.e. the live, editable tree structure that the browser builds in memory from a webpage. It is useful for doing stuff like this:
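A toy sketch of the kind of script I mean (sniff and scratch are made-up plugins, and this $ is a stand-in, not the real jQuery):

```javascript
// A toy $: it "finds" an element and returns a wrapper whose methods
// all return the same wrapper, so calls can be chained.
function $(selector) {
  const wrapper = {
    selector,
    sniff()   { console.log('sniff',   selector); return wrapper; },
    scratch() { console.log('scratch', selector); return wrapper; },
  };
  return wrapper;
}

$('crotch').sniff().scratch(); // the bling chain in action
```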


The $ (bling) function looks for an element (crotch) in the DOM and applies some functionality (sniff) to it. Since that function just returns the same element (crotch), you can run it through another function (scratch). In fact, every other function returns the same element, so you can create a whole chain of functions (the bling chain). This means the toolkit extends the element with custom behavior. Nothing wrong with teaching an old dog new tricks. However, jQuery adds all this functionality in one single dirty file. In fact, jQuery just fails on the nice script above, because it doesn't have any of the tricks I desire in store.

I could write my own script to extend the basic functionality of jQuery, but that's all very ad hoc. Since I have no way of telling what behavior is currently in jQuery, I'm sure to run into trouble later. I have to manage my extensions somehow to keep them compatible, and that basically means writing a lot of dollar signs.

Alternatives are not always easy to learn, and the community backing of jQuery makes it hard to switch. However, DomScript, a stack-based DOM-manipulation language built in Clojure, is a nice example of how a very clean alternative can be created. Unfortunately it cannot be used in the browser.

Acuna was also created for jQuery-like DOM manipulation. It is also stack-based, and it can be used in the browser. Acuna looks a lot like jQuery and can even be customized to use the same function names, so you don't have to know how a stack-based language works to use it. But at its core is a very powerful concept; it is more than just another toolkit. In addition, every function in Acuna is read from a separate module, which makes managing modules very easy.

Bling Chainers, beware the walk-in wardrobe scam: jQuery is a scurvy dog in designer clothes.

Tuesday, July 23, 2013

The corporate hijack of open source code

Big corporations like Monsanto somewhat resemble the Borg from Star Trek: you will be assimilated. Even aside from the radical anti-globalist rhetoric of Vandana Shiva, patenting seeds is a very monstrous thing to do. I mean, I don't like the thought of people stealing my ideas to make money that would otherwise have come my way, but the whole intellectual-property deal is already such a slippery slope. Tinkering with living organisms is bad enough, but patenting them crosses the line. I don't want to be drawn into a political discussion: this entry serves a purpose. I'd like to address the corporate undercurrents in open source software.

I'm a proponent of open business. I like to share ideas as they emerge, at least with a small circle. In my last post I wanted to raise awareness for the problem I have with object-oriented programming. After writing it, and watching the interview with Vandana Shiva, I suddenly became aware that the concept of the patented seed applies not only to closed-source software, but also to the whole object-oriented paradigm.

To summarize my find: classes are molds placed upon the world, from which some species of things or beings take their properties and behavior. In a corporate environment this worldview is very convenient: things and beings (if you'll allow the distinction) are born with certain a priori characteristics that fit neatly into the workflow. They have a fixed role in the process that is both efficient and controllable. The term "human resources" springs to mind. Things and beings are thought of as predestined in the course of action, and the less they deviate, the smoother the process runs. Also, think of the chain of inheritance in object-oriented programming: the corporate hierarchy delegates personal responsibility into non-existence. If the world could be modeled according to its role in a corporation, the corporation would become all-powerful. Chuckle at it all you want, but rest assured that this naive thought is in the mind of your CEO: it is the mind that produces classes.

The convergence of technologies Shiva refers to I call "the programmability of natural resources", and by now it's obvious that the object-oriented view made this convergence possible. Classes can be seen as seeds that are branded, and their exploitation in code is nothing short of software imperialism! People who use, for instance, Objective-C are forced to work this way. I can remember the day ActionScript 3.0 for Flash was introduced: all my code needed a complete object-oriented overhaul. And for what? JavaScript never had this rigid requirement... Perhaps because classes can be more easily patented? That would explain why the same thing is happening in the pharmaceutical industry, in biotech, in energy production, etc.

Lately lots of open source initiatives have come to the same point: forced objectivity. I won't argue that this approach isn't efficient or doesn't produce higher quality code: it does! I've been using a JavaScript toolkit that is object-oriented from the bottom up, and it's undeniably a very good one. But I'm considering getting off the bandwagon and reviewing my options. It feels kind of liberating and a bit scary too. You could argue: it's just a way of getting the job done. But is it? Perhaps this restraining order is all that prevents us from becoming very powerful and independent programmers. Or perhaps I'm just more paranoid than ever. You decide.

I could finish this post by drawing relations between object-oriented programming and Darwinism, Kantian philosophy, and conspiracy theory, but hey, that would just be pointing out the obvious, right?

Tuesday, April 16, 2013

Markup is dead!

Recently I have been working on run-time adaptive widgets. Why? I've invested heavily in the Dojo Toolkit. The widget department of Dojo is all about singular, static widgets, much like JavaFX, for example. They are fine for building desktop-like GUIs but, you guessed it, terrible for the mobile web. Sometimes we want a horizontal menu to be displayed vertically. I did search the web for other solutions, but they all seem to be markup-based in some way. These are great times for HTML5 and CSS3; that's what the buzz is all about. But I don't like it all that much. Writing markup is tedious and not DRY, and the platforms that generate it are opaque and too specialized. I'd rather spend some time figuring out the best way to go about it. However, building a competitive tool from scratch has never been my forte, so I desperately cling to my current tool set (as we all do). I was going to set out creating stuff in Dojo, as I always do. Then this video came along:


Suddenly (and any web developer watching it will feel the same) my JavaScript house of cards came tumbling down. This guy really has a point! We invested so much in JavaScript mainly because the HTML tool set is not complete, and probably never will be. We develop our apps the way we do (yes, my body tag is empty too) because "IE8 cannot do that", or because HTML is not semantically rich enough to declare my beautiful smart widget... Maybe if we could just extend the markup to declare that widget in an elegant, ambiguous way, so it can be observable for the rest of the world... But wait a minute. Isn't JavaScript capable of being semantic and observable? Is the current state of JavaScript not proof enough that there is indeed something not right with markup too, and perhaps never has been? So perhaps you ask: what could possibly be wrong with markup? Well, my wiseguy snot-nosed retort would be: do you really need it? Do you really need it when JavaScript is small enough, fast enough, malleable enough, semantic and observable enough? Do you need it when the choice would be up to you to either automate the web yourself or have it automated by some angle-bracket constructs? And your users or customers would be happy with your web creation, no matter what? OK, too many questions, time for some answers.

Obviously, there is a wish at the moment for the DOM to integrate more tightly with JS. The same can be said for SVG. Perhaps our woes will all be over once that has happened. The reason I'm not happy with Dojo is that there is no DOM at all; that is to say, it is abstracted to the point of being unambiguous and unobservable, and of course that is very bad. I should not have to tell my app what goes where and what size it has on a pixel level, or tell my widget what to do when horizontal orientation doesn't fit. I shouldn't have to express what is a Menu, what is a Tree, what is a ContentPane, when in markup this meaning could just remain implicit and become actualized through use. Of course I used CamelCase to make clear that I am speaking of classes from object-oriented programming, which is not particularly blessed with implicitness... But to state this in a declarative way seems to me to be less controllable in the end. And even if there were a declarative technique to do this automatically, it wouldn't necessarily be the way I want it. So developer control is an issue here, and I think that has been the main reason JavaScript took off the way it has. Apart from the question "do I need it", there is the question "can I control it".

In the video, one semantic example surprised me. It was the use of <input type="..." />, which we all know represents a form element. Now, having worked with forms a lot, I can certainly say that this is exactly the kind of exceptional case where classes have a right to exist! The distinction between types of user input is as well-defined as a UI can ever be, and in almost all cases we want to be as clear and concise as possible about what a user is expected to enter. Moreover, we want the input to reflect a very strict data model, so as to be valid. So why markup there? It's totally unambiguous, hence the domain of script. Ever needed a new form control or input type? I suspect you haven't been waiting for HTML5 for that, and you won't wait for it now if you like chosen.js, for example. But enough about this. I'm getting side-tracked.
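To make that concrete, here's a sketch of what input types look like when they live purely in script (the validators are my own made-up ones, not a real library):

```javascript
// Input "types" as plain validators keyed by type name: unambiguous,
// strict, and entirely in the domain of script.
const validators = {
  number: v => /^-?\d+(\.\d+)?$/.test(v),
  email:  v => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v),
};

validators.number('42');   // true
validators.email('nope');  // false
```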

When we incorporate "new" techniques (proxies, multimethods, protocols, CLOS-like metaprogramming and aspect-oriented programming on the one hand, and semantic-web-like relational data models on the other), it becomes clear that JavaScript will change and become more ambiguous and observable. I won't go into detail about those techniques, as they still need to be much more developed and normalized, and could take up multiple 600-page books to explain. But once they have been, and a decent way of controlling graphical components arbitrarily has emerged from the current endeavors, I see the role of markup, at least as it's currently interpreted by browsers, as largely played out. However, I remain open to a more in-depth discussion about its future.

Sunday, April 7, 2013

Decentralized Data Decrapitude

Tim O'Reilly: "Given that you put the web into the public domain... Are you a socialist?" Tim Berners-Lee: "LOL!"
Opening of "A Conversation with Tim Berners-Lee", Web 2.0 Summit 2009.

Why does Facebook have approximately 1 billion users, Google+ 350 million, and Diaspora, an open source alternative, only a meager 406,000? You could argue that Facebook started earlier and was at the time the better platform. But why was it the better platform? Because it used the infamous Social Graph. Supposedly developed by Philippe Bouzaglou in 2002, one of the early Facebook guys at Harvard University, it found its way into the hands of the Zuckerberg cabal. For further facts, watch the movie ;-) However, it wasn't the first attempt at crossing the boundaries of the Web as just a "bucket of text and links" (and the occasional image). As we know, the Semantic Web was thought up by Tim Berners-Lee, the very founding father of the aforementioned bucket, and the first article about it was published in Scientific American in 2001. Of course, the Semantic Web was not yet Facebook technology in any way, but envisioned as a means to not only collect meaningful data, but to share it as well. These strategies, collection and sharing, are always paired, because when you start accumulating knowledge about persons, you can do the same for other entities, and vice versa. This kind of knowledge was what the Web then lacked: you may go to a library or a bookstore to get a book because you somehow know about it, but sometimes you borrow a book from a friend who thought you really ought to read it. Your friend knows about you and about the book; that is why he or she recommended it to you. Interestingly, Bouzaglou now works on the collecting side of the coin, developing a semantic search engine, but unfortunately the demo currently throws an error.

In 2007 Tim Berners-Lee wanted to recoin the WWW as GGG: the Giant Global Graph. FOAF (short for Friend of a Friend) was introduced as a decentralized format for describing persons and their relations. Actually, FOAF is not a format in itself, but an RDF ontology, a way of encoding human knowledge about a certain subject. RDF is an open standard, and in that way built upon the foundation of the WWW: a totally decentralized bucket of anything. But, for some reason, RDF failed, or rather, continues to fail, because it was never adopted by the likes of Facebook or Google. Facebook terms its public interface to its data the Open Graph, but that name is a bit of a hoax. It is just a "front door": behind this door is the actual internal structure, the real connection of all the data Facebook has. Given permission, a developer can get a tiny bit out and use it to create his or her own application. This may look like standalone social data, but it cannot live outside the Facebook realm without losing its meaning, even when converted to FOAF (which, after all, is possible to do). How can this be? This has to do with a fundamental (philosophical) problem, namely the Frame Problem. Data is only meaningful within a certain frame, and this of course also applies to the social graph. Before Facebook, people knew nothing of any "social graph", and only now that we have it can we denote it: Facebook became our frame for socially meaningful data, and despite its current decline in popularity, continues to be so. Google+ is "Google's Facebook", Diaspora is "an open source Facebook". FOAF will never become anything other than a way of serializing Facebook data. Berners-Lee, and we, the lesser gods, are merely considering the consequences of this reality after the fact...
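To show how little magic there is in it, here's FOAF-ish data reduced to a sketch in plain JavaScript (the URIs and names are made up): an ontology ultimately just gives you agreed-upon predicate names over triples.

```javascript
// A person described FOAF-style as subject/predicate/object triples.
const foaf = [
  ['http://example.org/me', 'foaf:name',  'Alice'],
  ['http://example.org/me', 'foaf:knows', 'http://example.org/bob'],
  ['http://example.org/me', 'foaf:knows', 'http://example.org/carol'],
];

// Who does "me" claim to know? Just filter on the predicate.
const knows = foaf
  .filter(([s, p]) => s === 'http://example.org/me' && p === 'foaf:knows')
  .map(([, , o]) => o);
```

The frame problem is exactly what this sketch cannot show: the triples are trivially portable, but what foaf:knows actually means to anyone is not.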

One of the main concerns with this Facebook framing of the world is that "our" data is in the hands of a commercial, corporate entity. Of course, Facebook can easily state that "You own all of the content and information you post on Facebook", when ownership in the sense of copyright is not a moneymaker for them. Furthermore the legal terms state that "you can control how it is shared through your privacy and application settings", which underlines that control lies with you, but only as far as the Facebook implementation goes. The real money is in the fact that only Facebook knows everything, and will allow third parties to commercially exploit a portion of this knowledge. Facebook has monopolized social data for the last couple of years, only to be competed with on the same model, as found in Google+. This very much resembles the way large corporations do business on the whole: stock-trading, driving up prices, offshoring, and more of that modern-day imperialism. As an aside, Google+ started out with a system called OpenSocial, which may or may not be like the hoax we encountered in Facebook, but the system was abandoned by Google last year. As far as I can tell it continues to fuel MySpace, but who cares anyway... The major players now are Facebook and Google, and while their popularity is evening out, they drive the same strategy: data slavery. How do we want to counter this epidemic? Berners-Lee stated that "I express my network in a FOAF file, and that is a start of the revolution.", but that now seems rather misguided. It wasn't the technology in the first place that caused a revolution; it was the concept in the hands of a bunch of Harvard misfits.

What concept do we have to strike back, if all we can come up with is that "it has to be open"? Not much. A dubious initiative operating givememydata.com wants you to get your data out of Facebook, and offers some formats, including a GraphViz file, that will at least allow you to display and explore your part of the network. Apart from its leftist motto, it resembles a third-party application in every way, including the "U.S. commercial" extension to its domain. And again, what to do with your data when it's no longer "in the grid"? Port it to Diaspora? Well, it doesn't necessarily mean the same thing there, so you might have some work to do, provided you know what you're working with. Just sling your FOAF on the web, like TBL proposes you do? That means exposing your complete shopping profile to all kinds of potential harm, possibly worse than Facebook (in the short term at least). I don't know the answer yet, but I think it will take a lot more common knowledge about what social data is and what power it harbors. It will take a system of trust and authorization that is far more fine-grained than anything available, and that can be used by laymen. But the most important thing is that people need to be a little more responsible in their interaction with the web. So for now, "open" and "social" will have to become "constrained" and "responsible", and that does sound a lot more boring than "#ifihadglass I'd share the world to my almost million followers"...

Update 2013/04/08: I posted the 2010 Web 2.0 conference interview with Mark Zuckerberg in full, for the sake of completeness.

Update 2013/04/16: Shutting down the Open Knowledge Graph

Thursday, March 28, 2013

Beyond Inversion of Control

There is no doubt about it: the MVC pattern has taken over. In my view, it has just one "minor" flaw: tight coupling. Every time you insert a reference to another class, it's just gonna sit there forever until you, the developer, decide to take it out again. To ameliorate this, Inversion of Control was introduced. It uses dependency injection at runtime, so control of the dependency is taken out of the developer's hands. Some people seem to find IoC hard to understand, but it really is that simple. Who or what is in control, now that the reference has become all soft and fuzzy? You've guessed it: your config.conf. Or your wysi.wig. Something horrible, at least. Well, that's the price you pay for joining the MVC movement. But wait, wasn't there some way to explain to your boss that you're still a decent programmer? Yes, through references! He or she is probably looking at all your LinkedIn endorsements! Good job! Kudos to you!
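For the record, the mechanism itself really is that simple; a minimal sketch (the store classes are made up):

```javascript
// Two interchangeable "dependencies".
class SqlStore  { read() { return 'rows from SQL'; } }
class MockStore { read() { return 'canned rows'; } }

// IoC: the dependency is handed in at runtime (by a container, or by
// your horrible config.conf) instead of being newed up inside the class.
class Report {
  constructor(store) { this.store = store; } // injected, not hard-wired
  run() { return this.store.read(); }
}

new Report(new SqlStore()).run();  // 'rows from SQL'
new Report(new MockStore()).run(); // 'canned rows'
```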

Catch my drift? There is no problem to be solved in MVC; there are references to be made, in the same way your boss knows you're the best (for that rate). He or she has some kind of relationship with you, or knows (of) someone who does. This makes you partly indispensable, at least for the moment. The same was true for your dependency when you coded it. It had good reason to be there, but became replaceable at some point. When you keep track of this in a static way in another part of your life, it is still just sitting there: not in your code this time, but yes, sitting and waiting, like the money in your bank account, waiting to be plucked from its cozy nest. It hasn't become any more fluid. Really. Back to MVC: in fact, you shouldn't model anything. Software is better off without any reference to humanity. It should, as they say, "just work". Right? Right.

The problem with MVC is that it approaches abstraction from the wrong end of the spectrum, namely data. I forgot the reference, but I'm quite sure it was Jan Lehnardt who mentions somewhere that data is "fluff users want to see" (go ahead, google it yourself). MVC starts with the M, which is a way of saying: you go boy, draw all that putrid slime your client put somewhere (probably in some unwieldy RDBMS) into your code and pee over it some more; THAT is what your hacking life here on earth revolves around. So why not put the C in front? You're in Control, right? And throw out the V while we're at it... you're a programmer, remember? V is for designers. And what about that M? Do we need it in there at all? We have all kinds of beautiful (relational) solutions for modeling data, right? Oh yeah! Talking about Inversion of Control: you were never meant to be in Control in the first place! So throw out that C too! Wooo! Go nuts! Tell me, what do you get?

Wednesday, February 27, 2013

Abandoning hope... and XForms

Sometimes a decision must be made. I have invested many hours in a standard that is quite complex and tedious to maintain, and that was great fun, but enough is enough.

Why did I invest in XForms when it took so much effort and time? Because I believe standards are good, and good standards come from the W3C and the XML community. Also, I always had somewhat of an inferiority complex in IT development that I tried to compensate for by using techniques supposedly invented for common folk, i.e. non-programmers. Both are bad reasons from where I now stand. I took up XForms because I was infected by the enthusiasm of a friend (who was even quicker to abandon it) about 3 years ago. At first I assumed it simply used a model from which to infer a form. It turned out to be more like a superset of HTML forms with options for displaying and validating data. This was largely taken up by HTML5, as we all know. The Model remains the core of XForms, but the W3C is slow to respond to innovations and the need for a better user experience, while the programming community is evolving rapidly and growing steadily. The needs and expectations for dumbed-down tools are shifting accordingly.

When it comes to the model of XForms, there is a gap with HTML5. The model separates the form elements from their respective types, constraints and representations. But wait, is there really a need to separate these? Why not have all types, constraints and representations in the document proper? Ah, because of re-usability. Well, to be honest, I have never yet encountered a use case where I could actually reuse a model! It never mattered whether I declared binds on the elements or in the model, and I can't readily think of a use case where it would.

Back to my first intuition of generating a form from a model. I must have been quite stupid back then, because I was obviously thinking about a Schema... As it turns out, people have problems grasping the difference between a model and a schema, and to be honest, so do I. If a model is not generic in any way, then that means it is just another document. I recently found out that this is the core problem I encounter when working with XForms. Like HTML5, it is document-centric, not data-centric. That was my error.

The first time I thought of abandoning XForms started with a technical debate on the usage of a specific JavaScript toolkit on the betterFORM users mailing list. I was brooding on something all this while, but couldn't grasp the issue. Now I get it: the problem was not the choice of client toolkit for the job, but the problems that arise when implementing a document-centric solution in a data-centric environment. JavaScript has taken a leap forward since toolkits like Dojo opened the possibility of switching between declarative (document-centric) widgets and programmatic (data-centric) widgets. Since then, a lot of patterns have emerged that deal with the problems that arise when binding events and methods in a document. To benefit from these solutions, you have to do it the Dojo way. Clearly, this my-way-or-the-highway approach is not particularly friendly towards other document-centric solutions. HTML5 will allow a standard to be developed in tandem with dojo/event and dojo/method; XForms probably won't.

At some point I made a decision to only use programmatic widgets. The power of JavaScript engines these days is enormous, and smart design allows for a seamless user experience. At this point the incommensurability with XForms became most apparent. My use case here is localization: a website that needs to have forms in two languages.

At first my idea was document-based: use a different form for each language. Maintenance was of course killing, but the thought was correct. Much later I decided to translate the forms after all. How to go about it? I read something about putting a static key/value map into the model. Bad idea, right? I also thought about other solutions. I could, for instance, add selectors to my form elements in XForms and map those in the client from a different, more flexible location. Or I could translate the form using XSLT or XQuery and apply XForms to an already translated document. Per the betterFORM documentation I chose the first, and it's a mess. How does Dojo solve this? Each widget has a language attribute and can be assigned an nls object from its own namespace or anywhere else. The object is of course malleable at runtime, and if it's not available a default language is selected. As an aside, betterFORM does use the Dojo locale, but, alas, incorrectly (in 4.1).
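Reduced to a sketch (with a made-up bundle; this is the idea, not the actual Dojo API), the nls mechanism amounts to this:

```javascript
// Labels per language, resolved at runtime with a default-language
// fallback when a bundle or key is missing.
const nls = {
  en: { submit: 'Submit',    cancel: 'Cancel' },
  nl: { submit: 'Verzenden', cancel: 'Annuleren' },
};

function localize(lang, key) {
  const bundle = nls[lang] || nls.en; // unknown language: use default
  return bundle[key] || nls.en[key];  // missing key: use default
}

localize('nl', 'submit'); // 'Verzenden'
localize('fr', 'submit'); // 'Submit' (fallback)
```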

There is no way I can have the same power declaratively when it comes to localization. As it turns out, the publishing company I work for published a standard work on localization. Perhaps I should read it and find out my assumptions are wrong. But localization aside, I think the issue remains. Forms are widgets, rarely used in inline text. They should be approached data-centrically, at least on the client. When form data needs to be validated or stored on the server, it should be done in parallel with processing on the client, which in my opinion should be leading (I'm staying far away from HTML5 forms too). Forms can easily be generated from simple types and a schema. Creating a schema in a GUI is much more user-friendly than writing XML, so this is a MUST. Even if that GUI were to create XForms, forcing its rules on a user is a no-go. Yes, I have ideas for such a GUI, but they are not part of this article. The topic was abandoning hope and XForms. Perhaps now it is only fair to admit that I never had much hope for XForms anyway...
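A sketch of what I mean by generating forms from simple types and a schema (the schema shape here is my own invention, not any standard):

```javascript
// A minimal schema: field name to type and constraints.
const schema = {
  name:  { type: 'string', required: true },
  age:   { type: 'number' },
  email: { type: 'email' },
};

// Derive the form controls from the schema instead of writing markup.
function controlsFor(schema) {
  return Object.entries(schema).map(([field, def]) => ({
    field,
    control: def.type === 'number' ? 'number-input'
           : def.type === 'email'  ? 'email-input'
           : 'text-input',
    required: !!def.required,
  }));
}

controlsFor(schema); // three control descriptions, ready for a widget layer
```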

Thursday, February 14, 2013

XML is dead. Long live RDF?

I'd choose concept over implementation any time. I kinda always knew that, but I rediscovered it recently. I want to be able to trust that, and my intuition. It tells me XML is dead. Really. So here goes.

At XML Prague 2013 it occurred to me that RDF means the death of XML. I was discussing RDF with +Manuel Lautenschlager, and at one point he said: you can just infer XML. I tried to get him to elaborate on this statement, but we didn't seem to agree on the implications. Still, I thought: if one successfully manages to reason about the format of data, then XML would be one of the possible outcomes. This doesn't just mean that XML could be a subset of RDF; conceptually, XML, its media type and any knowledge about it could simply become part of an ontology.
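A toy version of "you can just infer XML", assuming a made-up hasChild predicate to carry the tree structure (the document is invented for the sketch):

```javascript
// A tiny graph: one predicate encodes parent/child, another text content.
const triples = [
  ['book',   'hasChild', 'title'],
  ['book',   'hasChild', 'author'],
  ['title',  'text',     'Weaving the Web'],
  ['author', 'text',     'Tim Berners-Lee'],
];

// Serialize the graph as a tree by following hasChild recursively.
function toXml(node) {
  const children = triples
    .filter(([s, p]) => s === node && p === 'hasChild')
    .map(([, , o]) => toXml(o));
  const t = triples.find(([s, p]) => s === node && p === 'text');
  return `<${node}>${t ? t[2] : ''}${children.join('')}</${node}>`;
}

toXml('book');
// '<book><title>Weaving the Web</title><author>Tim Berners-Lee</author></book>'
```

John's objections apply in full: real RDF graphs are not this tree-shaped, and doing this at scale would not be performant. But as a concept, the direction of inference is the interesting part.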

I discussed this with +John Snelson, who wasn't impressed. According to him, RDF is too fine-grained to present itself as a tree, the serialization would not be performant, and implementing the concept would be more complicated and time-consuming than just using XML. I'm not sure whether he'd sooner support embedding XML in RDF, as proposed by his MarkLogic colleague +Charles Greer, who gave a talk on the subject. John thought the idea might be interesting in theory, but wouldn't work in practice. Still, I feel his approach is a bit too techie. Data is just data, and if it weren't for concepts developed over the past decades we would still be punching holes in cards. Of course, I wholeheartedly agree that when it comes to computer science, the only way progress can be made, or will even occur, is when a thought is put into practice and solves some real-world problem. But in this case, I think Manuel may have had a point, whatever it was.

Yes, for now I see that we shouldn't "infer XML", but the problem of using HTML, XML, RDF and JSON together, and when to use which, remains an issue. Particularly for me, because I have a lot of room to experiment and choose the best solution at any time. From an eagle-eye perspective, the world of data just doesn't seem so complicated as to need all of them. Personally, I'd rather lose some things along the way and go back to pick them up later than stay put, juggling formats all the way to the bitter end.

Do I really need HTML? No. I need some way to tell a machine: this is a rectangle, this is a bitmap, this is a font rendered at this size and at that location. This you can click and it screams at you; this just sits there and will shift like sand when I try to resize it. Do I need XML? Do I? Sometimes a user wants to see what he's actually doing. He wants to see the under-water screen and understand it. Why deny that? Anyone can understand and write XML (as long as it has no namespaces). Do I need RDF? We all do. We need to finally understand that the world is about local knowledge and conventions. It's the only way we can improve upon the WWW and fight the googly-eyed monster. Do we need JSON? Probably not. We need a way to transport a construct of the everyday datatypes we use in our programming language. We're just very lucky JavaScript looks the way it does, and I wouldn't for the life of me go back to PHP.

Since JavaScript took off, a lot of worries have faded into the background. But recent ideas like moving RSS to JSON tend to become a little like using a hammer for everything. Just to recap: data is still just data. More and more I get something like: who cares? We'll keep on blogging about the advantages of this over that, and meanwhile the world keeps turning. The main message is that the concepts are much more important, and they are: relations versus multidimensional arrays. Someone told me some time ago he would represent graphs as trees no matter what, just for the sake of having a user interface that can be navigated in a traditional way. And only now do I see that, yes, so should I. So we have come full circle...

Trees are dead. Long live graphs. In the form of trees.

Tuesday, February 12, 2013

XML Prague 2013 Afterthought

An anniversary is supposed to be a happy occasion, but at some point you also tend to feel sad. You sense that when something reaches a certain age, it's also a step closer to death. Happy birthday dear XML.

However, if MicroXML is to succeed XML (as proposed by Uche Ogbuji), then perhaps it means that ENQUIRE is going to replace the WWW. Some weeks ago I watched the unveiling of the Nintendo 64, when we got all the cool games we still play today. Afterwards I wondered: how can it be that Nintendo developed all this stuff back in '96, when I'm still struggling with namespaces? Happy birthday dear me.

I read Michael Kay's blog entry on MicroXML, and his concern for namespaces in XSLT. I understand this concern, but XQuery doesn't seem to share the problem. Defining a module from a URI and mapping it to a local namespace is common practice there, and the same idea recently found its way into JavaScript in the form of require.js, which sails under the flag of the Dojo Foundation. I don't see any problem with that, but I do see what terrible things can happen when you encode modules into your data. It's a bit like trying to drag Java dependency management into an RDBMS. It would have killed SQL instantly.
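The mapping I mean, reduced to a toy registry (not the real require.js API, and without its caching or asynchronous loading):

```javascript
// A registry from module id to its dependencies and factory.
const registry = {};
function define(id, deps, factory) {
  registry[id] = { deps, factory };
}

// Resolve dependencies recursively; each dependency is bound to a
// local name, namely the factory's parameter, much like XQuery's
// `import module namespace local = "uri";`.
function requireMod(id) {
  const { deps, factory } = registry[id];
  return factory(...deps.map(requireMod));
}

define('math/add', [], () => (a, b) => a + b);
define('app', ['math/add'], add => ({ sum: add(2, 3) }));

requireMod('app').sum; // 5
```

The point is that the module id stays out of the data the module produces; the local name exists only inside the factory's scope.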

To continue musing on JavaScript: the require.js pattern was devised to solve the problem of downloading modules from the web asynchronously while still being able to use them at the proper moment in the application. Although this is a typical requirement for web applications, it does add asynchronism to the stack. I wonder, what is Kay's approach to this in his XSLT-for-the-client implementation? I know that eXist has an XQuery function that can spawn a new thread, and I have discussed the possibility of asynchronous functionality in XQuery with Wolfgang Meier. It seems a lot can be gained from looking at node.js and the way synchronous versus multithreaded programming is handled there.

But perhaps by now the following question has arisen: why asynchronous processing of XML, or of anything at all? Well, lots of reasons really. Say I want to run batch jobs, and I'm certain my machine can take much more load, but it is simply waiting for a thread to finish. Or I send an HTTP request and don't want to wait for the response. Sure, but why in XQuery? Back to JavaScript again. The main problems I face with it on a day-to-day basis are that developers don't know or care about functional programming and immutable data. Moreover, they don't feel any need to write semantically sound code. And yes, we now have require.js and Dojo patterns like deferreds and aspects, but JavaScript developers still rely heavily on delegation, monkey patching and closures.
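For the uninitiated, the deferred idea boils down to this sketch (in plain promise terms, not the actual Dojo API):

```javascript
// Create the promise now, keep the resolver, and resolve whenever the
// asynchronous work actually finishes; callers register callbacks today.
function deferred() {
  let resolve;
  const promise = new Promise(r => { resolve = r; });
  return { promise, resolve };
}

const d = deferred();
d.promise.then(result => console.log('got:', result)); // registered now
d.resolve('batch done');                               // fired later
```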

Another problem with JavaScript struck me after listening to Juan Zacarias on JSONiq: it doesn't have a good querying interface for its own bloody data model! Oh, that's right, it doesn't even have a data model ;-) how silly of me. It's just an in-memory construct of what was already available. Why not put it in a database and pretend it is a model and... OK, I'll stop. The query interface in JavaScript was never properly fixed by jsonquery, but RQL is a much more solid attempt. Too bad the TLA sucks; it does what it needs to do. One problem solved, but still a few to go. When I look at some of the code I have to work with or extend in Dojo, the hairs on the back of my neck stand up. And yet we all know it's the best toolkit out there...
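The kind of querying I'm missing, as a sketch (not the actual RQL syntax): filtering and sorting plain objects declaratively, instead of hand-rolling loops everywhere.

```javascript
const people = [
  { name: 'Alan', age: 41 },
  { name: 'Ada',  age: 36 },
];

// A minimal query: predicate plus sort key over a copy of the data.
const query = (data, pred, sortKey) =>
  [...data].filter(pred).sort((a, b) => (a[sortKey] < b[sortKey] ? -1 : 1));

query(people, p => p.age > 30, 'name').map(p => p.name); // ['Ada', 'Alan']
```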

Wouldn't it be more proper to have a client-side implementation of (asynchronous) XQuery? Sorry, master, I mean no disrespect, but XSLT doesn't seem to do it for me. Not in this form, anyway. Nor does ClojureScript, by the way, with its terseness and steep learning curve. I will leave you with an open question: what should the data model look like?