A few moments ago, totally out of the blue, I got a phone call from an employment agency.
They were following up on details, they said, from the CV I sent through a recruitment website sometime last week. I had actually applied for a particular job (a nine-to-five sales assistant vacancy for some unnamed trade retailer) but I didn’t expect to hear anything more about it. Hence my surprise at this call, especially since it wasn’t even about the job I had applied for.
So I wasn’t exactly prepared, and mumbled some garbled nonsense about my job preferences for a couple of minutes. (I’m much more articulate when I can write things down, I swear.) They said they’d let me know of any available positions in a few days.
I won’t hold my breath. But even if they do call back I have a feeling I won’t be suited to anything they’d have to offer. I’m just too picky.
Today brings the debut of Omnivore, a new creative writing site based on the theme of taste.
Now I know that might seem very restrictive at first glance, but let the editor, Tom at Infovore, explain things more clearly:
As the tagline says, the good things in life taste good. Omnivore is a creative writing site that subscribes to that mantra. All the writing on it is inspired by taste – be it the taste of food, of places, or of people. At regular intervals, a new piece of writing by one of the Omnivore team will be uploaded for your pleasure. It could be anything – narrative, commentary, recipe, list. All that matters is that the writing is as tasty as the experiences contained within it.
I bring this to your attention because, yes, I am one of the Omnivore team. I thought it was high time to branch out a bit, to flex my creative muscles, so when the opportunity arose I lunged at it, quite hungrily. (Many thanks to the ed. for having me aboard.)
My first contribution should be available for your reading pleasure in a couple of weeks. Until then, pay a visit and allow my colleagues to entertain you.
Oy! I know the Catholic Church is pretty fucked up, but giving makeovers to the dead? _Is nothing sacred!?!_
Yesterday was the fourth anniversary of my initial foray into publishing on the World Wide Web.
My first attempt at a website, a horrible mess by my current standards, went live on the 15th of October, 1999. I had zero knowledge of HTML back then (the whole idea of it intimidated me) so I created the site using FrontPage Express. Not only was the markup atrocious, but it looked awful too: bright orange text on a black background. I cringe when I think about it now.
A few months later I redesigned, this time with white Arial text on a teal background (which switched to blue sometime later). It looked classier, certainly — there was definite evidence of a better design sense — but it was still pretty bad.
In mid-2001 grey was my background colour of choice. The site was neat enough and easy to navigate, and looked quite snappy provided one was browsing with Internet Explorer (yes, I was still using FrontPage, unfortunately) but like most homepages there was very little substance to it, save for the diary.
I had started the diary months before, writing enormous long-winded and rambling entries, and only updating on average three times a year (I was that motivated!). It was a heap of steaming self-indulgent crap, to be honest with myself, and I didn’t feel much in the way of sorrow or regret when I eventually removed the diary from the web in the autumn of 2001. Neither did I feel in any great hurry to fill the void.
I had attempted blogging once before, at the end of 2000, prompted by a newspaper fluff piece about the nascent phenomenon. If I had known that Blogger existed back then I might have stuck with it, but instead I had to go with one of the services recommended by the article, Groksoup (which doesn’t even exist anymore). I think I made about three or four posts before I got bored and gave it up. I just didn’t get it; what was the point of making a boring list of websites that virtually no one would see? Who would want to read it? And why would I want to waste my time doing it?
Almost a year later, I got it. I started visiting Mat Honan’s site, which in the spring of 2001 morphed from what would be considered a common-or-garden static homepage into a dynamic cornucopia of information. Before I discovered his weblog I felt that the web was a dead space, that there was nothing new to find. Reading Mat’s various writings removed my blinkers and introduced me to a whole new world, the democratic face of the web. I quickly learned that weblogs weren’t just about boring lists of links, nor were they merely diaryesque spoutings of everyday tedium. Weblogs were — weblogs are — whatever you want them to be.
It was still some time before I decided to jump in the deep end and try it again for myself. I registered with Blogger in October of 2001, set up a weblog as an extension of my old homepage, and made my first posting on the 1st of November. Second time lucky: I caught the blogging bug in an instant. At last, my presence on the web had a purpose. Within a matter of weeks the old homepage was dead, while the weblog became the star attraction. (Meanwhile, on the design front, I ditched FrontPage, learned HTML from scratch and started hand-coding the site. This was a big turning point for me, to be sure.)
And now, here I am. I’ve been keeping this weblog for almost two years now, though it feels a lot longer, and I’ve come a long way. I have a simple, clean and professional-looking design (one that I’m actually happy with!) created by hand with standards-compliant code. I also have a vague idea about where I’m going in terms of the information I contribute to this ‘textual repository’, as the weblog has been so humbly appellated. Plus, I seem to have attracted my fair share of readers who take a peek to see what’s happening every now and again, and I’m ever so grateful for it. (I would like to take this opportunity to extend my most sincere thanks to you for visiting, whoever you are, and hope that I encourage you to return.)
But I don’t do this just to attract an audience. First and foremost, I do this for myself. I do this because I live to think, to contemplate, to philosophise, because I have opinions and I want to share them. I do this because I hunger to write, and the more I write the better I get. I do this because I want to make my mark, however insignificant in the big scheme of things. I do this because I enjoy it, and I’d like others to enjoy it too. I do this because I love it.
I didn’t know when I began this journey four years ago just how much it would mean to me today. I wonder how I’ll feel about it four years from now.
Mark Lawson writes in Saturday’s Guardian on the case of Georges Lopez, the school teacher featured in the excellent documentary Être et Avoir (here’s a review I wrote over the summer) who is reported to be suing the film’s producers for a share of the profits arising from its phenomenal and unprecedented success:
Because viewers of Etre et Avoir (sic) tend to view Lopez as an antidote to ambition and capitalism, this news seems depressing. But the teacher may be exploring an important – and influential – area of law. Someone who wrote a book about their life would need to be paid if their story became a basis for a film…
…[h]owever, documentary – in television and cinema – has traditionally used the public as fodder for either no reward or a token payment under euphemisms such as “disturbance fee”. The justification would be that documentarians are reporting rather than adapting someone’s life, and journalism should not be paid for.
But this justification doesn’t cut it when money comes into the picture. In the past, documentaries would have been much more equivalent to television journalism, at least insofar as theatrical releases were rare and publicity virtually non-existent. In contrast, Être et Avoir has proven to be a huge money-spinner at cinemas worldwide, no doubt fuelling future video sales, and the filmmakers alone are reaping the financial benefits.
Times change. Documentaries aren’t just about art or journalism anymore, nor are artistic or journalistic value the only motivation for making them — in an industry run by bean counters and driven increasingly by the need for rapid return on investment, they can’t be. (It hasn’t happened overnight, but it has been a recent phenomenon: films like Hoop Dreams and the work of Nick Broomfield spring to mind.)
So now that the goalposts have been moved, shouldn’t M Lopez (and the children in his class at that) be entitled to some share of the wealth generated by the film’s success? When I first read Lawson’s article I was going to argue that maybe Lopez was pushed into the lawsuit, possibly by friends or colleagues who felt he was being exploited for little recompense (after all, he’s the guy on the screen, not the production company). But thinking about it more, I’m sure that if I were in his shoes I wouldn’t need anyone to push me into it; I’d feel plenty aggrieved anyway.
Especially over the last decade, personal image rights have become more and more important. So, if the makers of Être et Avoir are profiting from the ‘trademarks’ of the unique personalities captured on film, is it not right that those same personalities should be awarded some share of the proceeds?
Here’s another question: does this case make Lopez a spoil-sport? One could argue either way. It certainly appears, in Lawson’s words, _“ironic that M Lopez, celebrated by lovers of the film as a representative of old-fashioned values, has become a prominent advocate of the new-fashioned ones”_ (of course, his lawyer might well have something to do with that). Yet on the other hand, the filmmakers could be seen as using artistic or journalistic value as a shroud for capitalist interests at the expense of their subjects.
It wouldn’t be right to set a precedent either way: either documentary subjects sign away their rights for nothing, or the making of future documentaries becomes prohibitively expensive. There must be a middle ground that would make everybody happy.
It’s certainly a shame to see a work so beautiful as Être et Avoir tainted like this by the stain of filthy lucre, but it’s a situation that needs to be addressed nonetheless.
Matt Webb has recently posted a great string of entries on extelligence (the sum of the cultural sphere that we, as intelligent beings, both experience and contribute to) and its relationship to presence — that’s presence in terms of _telepresence_, or the emulation of real-world intercommunicative presence through technological means. (Don’t take these definitions as gospel — I just like to define ideas in my own words, to see if I understand them correctly; if I don’t, then challenge me! Discourse leads to greater knowledge, after all.)
The invention of the telephone probably gave us the first example of real-time intercommunication without physical presence. (I’m not counting the telegraph; despite its superficial resemblance to contemporary SMS text messaging — in terms of the restrictions imposed upon the user by the medium anyway — it was never exactly real time, nor intercommunicative.) But at least with the telephone, the voice you hear and speak to provides verification of physical presence in its absence; in recent years, as people have had the capacity to communicate in real time over computer networks without ever having to see or hear those they communicate or interact with, this paradigm has been abstracted much further.
It appears that the lack of verification (in most instances anyway) is the reason why the Internet as a whole, as it is today, is always (and for most people, forever will be) seen as a poor relation to actual face-to-face contact, even though it’s plainly obvious that a) the Internet facilitates communication and collaboration on a scale that would not have been fathomable even 20 years ago, and b) we don’t actually _need_ physical presence to achieve collaborative goals, even though it does seem to help.
Lately, there’s been a backlash against this pejorative trend from the emerging technologies camp. At least, that’s how it looks from my point of view, with the popularity of real-time collaborative technologies such as Hydra and so on, which, when you think about them, are essentially just the professionalisation of the chat room concept: certainly it’s a more targeted, refined approach, as opposed to the free-for-all atmosphere of traditional chat rooms, but is it really this great revolutionary breakthrough? Or merely mutton dressed as lamb? (Feel free to challenge me on this point!)
If there is a backlash, a better case would be made in terms of text messaging and its effect on mobile phone usage (in Europe, not so much in North America — the Far East is a whole other kettle of fish, and complicates the picture even further). In my own experience, I’ve had a mobile (okay, _cellular_) phone (the same phone I have now, as a matter of interest) for over four years now, since just before SMS was enabled on my network, so for all intents and purposes I can say that I was there from the beginning. And it has to be said: sending a text message is preferable to making an actual phone call for a number of reasons. Not only is it cheaper, but I often find it quicker and easier — especially if you don’t have the time to talk to someone, or simply don’t want to. Plus, for some reason, more often than not it still _feels_ like there’s a sort of richness or presence to it (for example, a few moments ago I got a text message from my girlfriend which, far from being a symptom of technological depersonalisation or depersonification, actually implies her presence more than it signifies her absence, and in turn all the things I love about her that make me go all mushy inside).
(On the other hand, even though I was there from the beginning, I never really got into the whole _txt msg slng_ thing: surely such abbreviated language is only necessary if the message that needs to be communicated is more than 160 characters long (or whatever the allotted limit is)? Maybe my habits are completely anomalous? Does this alienate me from the mobile phone culture? In terms of my participation, does this significantly reduce my level of extelligence? I wouldn’t previously have thought so, before I started thinking more deeply about it here.)
There’s a good reason why the symbiotic metaphor is used to describe our relationships with our phones: for the vast majority of us who have them, we would find life quite difficult — not impossible, but difficult by virtue of our developed habits — to live without them. (I say _phones_ when I really mean _any_ mobile communications device, but at present in Europe only phones have significant and suitable prevalence.) I ask, what’s happening to our extelligence in this scenario?
How can this concept be tied in with the bigger picture (our intelligent relationship with modern technology as both a facilitator and a surrogate for traditional intercommunicative standards/methods, and in turn this technology’s profound effect on our extelligence and _intention_ (in the logical sense of the word) with regard to our experience of and place in society) in order to devise a more inclusive/conclusive theory?
Or something like that anyway. Notes and comments are more than welcome.