Anthropomorphizing Technology

I’ve just read an extract of Clay Shirky’s Cognitive Surplus book in the Times, along with a very good interview with him and other web gurus. Unfortunately you have to pay to get Times articles these days (hmmm, ironic), but there’s a good review of it in the Guardian. There are lots of good videos on YouTube of him talking about the concept of cognitive surplus, so I encourage you to listen to them.

Clay Shirky

…anyhow, I could spend the rest of the year dissecting Shirky’s writing, because I love his enthusiasm and agree with much of what he says, but what I wanted to get out in this post is the fact that people are really anthropomorphizing technology. He does it himself – Shirky writes particularly emotional prose about the internet and how using it in 1992 was an emotional experience for him (his brain flipped out!) – his compatriots do it when they write about the internet and technology, and we’re all doing it as a society.

I was out drinking with Martin Weller the other week (always a bad idea) and we got to talking about the fact that friends of ours talk about a piece of technology with such irrational love and affection that to an outsider it seems bizarre, but to us it’s quite normal, even if we don’t always share their love of that particular technology. Some people at the OU, for example, love FirstClass because we’ve used it at the OU since the mid-90s, and some feel a kind of ownership of it that others might not.

It’s not just ownership though, but a sense that the technology is life-enhancing. Take the recent ‘buzz’ about the iPad. When all is said and done, the iPad is not a big technological leap forward from tablet PCs, or indeed from Apple’s own iPhone, but it got to people emotionally in a way that I haven’t seen a technology do to the same extent before. It was slightly scary to see some people’s reaction to it and how they talk about it as if it were a living, breathing thing.

I think there are two distinct patterns here.

1. A kind of addictive quality to new technology where it fills a gap that people never knew they had.

2. A sense of ownership and stakeholding for technology that has been around a long time and has given that person a wholesome experience over a sustained period of time so that they have become personally involved with the technology in a way they wouldn’t have imagined when they first saw it.

Both have parallels in relationship building: the instant attraction of new lovers, and the slowly growing deep love of a long-term relationship.

…I don’t think that’s a coincidence.

Convergence v Specialism

I’m very interested in the trend with devices such as the Xbox 360 towards a convergence of media types and delivery, with its support for Sky TV over broadband via Sky Player – Stephen Nuttall from Sky was quoted as saying: ‘Our partnership with Xbox is a further example of our commitment to put choice and control in the hands of customers.’

I’m particularly interested in the ‘blurring’ – or perhaps integration is a better word – between the different media types. The idea of interactivity around watching a football match while downloading stats and interacting with other fans is cool; adding value to experiences through ‘back channel’ activities is becoming more prevalent, as are ‘on demand’ services.

I think the really interesting stuff will come when the boundaries between an interactive TV experience, a gaming experience and an internet experience all disappear, to the extent that they become platform-neutral and coherent rather than bolt-ons. The announcement of the Boxee Box earlier this month is a step in the right direction: it really is opening up rich resources and putting power in the hands of users. It also means that you no longer need to get content ‘produced’ on a TV channel in order to reach a large audience – consumers become producers.

I’m very interested in using gaming technology and interactive TV in more powerful ways to develop engagement and learning; supported by the internet, they become extremely powerful tools.

kids, computers and change

I haven’t been blogging for a while because I’ve been involved in the logistics of moving a university department to a new building this week (think herding cats and you get the picture). Now that I’m back I’m going to make up for it with a bit of a stream of consciousness about slightly connected topics.

1. My kids have managed to ruin my computing at home by spilling water over the keyboard, which has taken out the ‘n’ key and the space bar. I tried using the on-screen keyboard and it’s like pulling teeth – worse than trying to do a full blog post by texting. I found it seriously hard work, so I’ve resorted to the laptop. It’s these small uphill battles that make my online experiences so erratic, or perhaps I’m just making excuses; other people seem to be ‘always on’, but in my experience power plugs, network connectivity, kids and various other things always get in the way. I talked to a colleague about the transient nature of my online stuff when we were trying out Plaxo recently. I was trying to get my Twitter friends into it and then tried using my Facebook contacts, but the interface started to annoy me: I was five seconds in without success and about five seconds away from giving up. So I do think that the ten-second rule that NN (Nielsen Norman), the usability gurus, used to apply to websites still holds (in my experience) for web apps. In 2006 the BBC had this down to four seconds for commercial sites selling goods.

2. Janet Street-Porter did a rant in her editorial in the Independent on Sunday about how all our details are being exposed and exploited by, for example, YouTube, and about studies showing that people who use the internet and social networks for long periods have trouble making real friends, so that relationships for the next generation are going to suffer. I hardly ever agree with JSP and my views are significantly different to hers on this, but I do think that getting the public/private stuff right on the internet is difficult. I tend to be very cagey about myself because I prefer to keep my private life a closed book; knowledge is power and you never know when that slip of the tongue might come back to bite you. Other people, however, are totally open, and I find this refreshing but also a bit disconcerting. I’m a very shy person and I expect that comes through in how I act online and choose to reveal myself in the virtual world. I don’t worry about how kids will deal with real relationships, by the way. They’re just finding new ways to communicate, not replacing the old but enhancing it.

Twittertrends

I see that Tony Hirst has added a post with some nice graphs showing the potential growth of Twitter. What is it they say about lies, damned lies and statistics? It’s interesting to note trend data like this because things such as Second Life get big blips in popularity, and I think it’s when something new has been created within the space. Twitter shows steady growth, but as Tony says, the figures lie. Twitter is certainly filling a gap that existed, judging by the response to my recent post and the number of people arguing very strongly in favour of it.

I’ve been recommended by a friend to read a book called “The Future of the Internet and How to Stop It” (Jonathan Zittrain). There’s a review of it in this month’s BBC Focus magazine too. According to the review, he argues that the end of the internet as we know it will come from a lack of creativity and from people turning away from the open web – because of the lack of control and the prevalence of malware and viruses – towards more ‘locked down’ solutions. I haven’t read it yet but it does sound like an interesting read. I’m off to get it.

My failing memory and fear of going outside

I have just started to use Remember the Milk, which is set up to handle all those things that you always mean to do but never actually get around to doing. It’s got a good range of web 2.0-style integrations with Twitter, phones, Google Calendar and so on. My problem currently is that I’ve not had the time to remember to put reminders into RTM to remind myself to do stuff. I’m now having to put a reminder into RTM to remind me to use RTM to put stuff in! It must be my age.

Actually, one of the developers in my team (Nick) is using RTM and the RTM API to provide some elements of the Social:learn project. I think Social:learn is a fantastic concept in that it’s exploring methods of learning that are much less institutionally (provider) focused and much more learner focused, which is exactly how it should be. The investment is relatively lightweight at this stage, as it’s largely gluing, adapting and sharing data across a series of existing tools and architectures, but the potential is huge if even one of these applications takes off. I think it’s exactly the right approach to explore in the ‘post-VLE’ era, where people are less concerned about where they get information from than about what the data is and how it will help them.
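For a flavour of the sort of gluing involved, here’s a minimal sketch of a signed call to the RTM REST API, based on its published signing scheme. I haven’t run this against the live service, and the api_key and shared secret are placeholders, so treat it as illustrative only:

```python
import hashlib
import urllib.parse
import urllib.request

API_KEY = "your-api-key"        # placeholder
SHARED_SECRET = "your-secret"   # placeholder

def rtm_call(method, **params):
    """Make a signed request to the Remember the Milk REST API."""
    params.update(method=method, api_key=API_KEY, format="json")
    # RTM's signing scheme: sort the parameters by key, concatenate each
    # key and value, prepend the shared secret, then MD5 the whole string.
    raw = SHARED_SECRET + "".join(k + v for k, v in sorted(params.items()))
    params["api_sig"] = hashlib.md5(raw.encode("utf-8")).hexdigest()
    url = ("https://api.rememberthemilk.com/services/rest/?"
           + urllib.parse.urlencode(params))
    return urllib.request.urlopen(url).read()

# rtm.test.echo just bounces parameters back - handy for checking the signing.
print(rtm_call("rtm.test.echo", hello="world"))
```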

In the same vein, colleagues of mine are discussing moving wholesale away from the institutional systems that provide email, scheduling, document sharing and many other business functions, over to external providers (e.g. Google), and the arguments against doing this are now being outweighed by the arguments for it. I still have some reservations though, so a group of us are going to explore this and work alongside the central services provider at the OU to see how well these things meet staff needs.

Here are some of the arguments against (in my opinion):

1. Stuff is less secure and more open to attack.

2. External providers can disappear or have services out of action when they’re needed.

3. External providers have no responsibility to maintain the (free) services for users.

4. The amount of space you get may not be adequate for your needs.

5. There is no “institutional branding” on emails etc. coming from external providers.

6. If things go missing, there’s no backup or retrieval mechanism.

7. There is no (institutional) support for dealing with configuring external clients or services.

8. Stuff coming through external providers may be prone to interception or blocking.

9. It doesn’t integrate with other institutional services.

Here are my responses…

1. The security on external services is now as good as security internally. The bigger, more established players have invested far more time and money in ringfencing and securing data than any public sector institution could.

2. The datacentres used by most external hosting providers have levels of redundancy that again outstrip anything a single institution could provide. Their businesses rely on keeping services up 100% of the time, and they have massive contingency and failover options in place so that individual parts can be removed without the service failing. (Matt Mullenweg, the founding developer of WordPress, gave a good talk on this very topic at last year’s Future of Web Apps conference.)

3. This is true, although advertising revenue helps ensure they have a need to maintain free services, and these are also how people get ‘hooked in’ to the next tier of services, so dropping them would be catastrophic. The big players also rely on the number of users they attract, so a failing free service would soon stop them operating.

4. This is no longer true; in fact external providers can generally offer more space than any institutional service provider. Email is an example of this, with Gmail being vast compared to the meagre limit set by the institution (50Mb?).

5. This can be worked around. You can rewrite the headers to show that mail comes from your institutional account, and you can add the institutional signature to emails (see the first sketch after this list). I think this is problematic though, as the header rewriting can mean your mails get trapped by filters. I would suggest, however, that it may not be the most terrible thing in the world if emails come from an account without the institutional domain. There have been occasions, for example, when local email has been down and the IT people here have all switched to Gmail, other mail providers and external IM systems in order to keep in touch and keep information flowing.

6. There can be backup and retrieval; it’s down to how you manage your account. You can, for example, get POP mail to keep a copy on the server while also downloading, so your local mail client stores versions periodically, and you can set up routines to do this automatically (see the second sketch after this list).

7. I think empowering users to help themselves is always a good thing and takes the burden off IT support. External systems tend to be very easy to use and configure in order to attract customers. I see that as a good thing for organisations.

8. There is some truth in this, but it’s a manageable issue. I’ve heard of people not receiving mail they should have, and of others being blocked because the mail server they use is on some blacklist. It’s manageable because if you find it happening you can do something about it: switch to another mail server, reduce the likelihood of your email being used by spam bots, and watch what filters are being applied to incoming or outgoing mail.

9. I would say that the opposite is true. Institutional services tend to be siloed; internet-provided services tend to have open APIs and talk happily to many other tools and services. If they don’t, you can build the integrators yourself.
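On point 5, here is a minimal sketch of the header-and-signature approach using Python’s standard library; the addresses, server names and password are all invented for illustration:

```python
import smtplib
from email.message import EmailMessage

# Invented addresses and servers, purely for illustration.
INSTITUTIONAL = "j.bloggs@open.example.ac.uk"
EXTERNAL = "j.bloggs.external@mail.example.com"

msg = EmailMessage()
msg["From"] = INSTITUTIONAL          # what recipients see
msg["To"] = "colleague@example.org"
msg["Subject"] = "Sent via an external provider"
msg.set_content(
    "Body of the message.\n\n"
    "--\nJoe Bloggs\nThe Open University (institutional signature)"
)

# The envelope sender is the external account while the From: header shows
# the institutional address - exactly the mismatch that spam filters
# sometimes flag, hence my caveat above.
with smtplib.SMTP_SSL("smtp.mail.example.com", 465) as smtp:
    smtp.login(EXTERNAL, "app-password")
    smtp.send_message(msg, from_addr=EXTERNAL)
```

And on point 6, a sketch of a POP backup that deliberately leaves every message on the server (again, host and credentials are placeholders; run it from a scheduled task for the automatic routine I mention):

```python
import poplib

# Invented host and credentials.
pop = poplib.POP3_SSL("pop.mail.example.com", 995)
pop.user("j.bloggs.external@mail.example.com")
pop.pass_("app-password")

count = len(pop.list()[1])
for i in range(1, count + 1):
    raw = b"\r\n".join(pop.retr(i)[1])   # RETR downloads a copy...
    with open(f"backup_{i}.eml", "wb") as f:
        f.write(raw)
# ...and because we never call pop.dele(i), the originals stay on the server.
pop.quit()
```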

Finally, the reasons FOR going outside…

1. Cross-browser, cross-platform, cross-system, cross-organisational access to services.

2. No issues or barriers to use.

3. More space to use and store data.

4. Better for sharing.

5. Doesn’t require a complex infrastructure to use (similar to 1, but slightly different in that I’m talking about the dependencies and platform requirements, for example having to use a VPN from home and having the Office 200x suite installed on top of Windows).

6. Always available!

Corporate Authentication Systems (hell!)

I’ve been struggling recently with the ‘enterprise security system’ in place at the OU. This is some obscure system invented in-house (by sadists) to authenticate people against our systems.

It works OK most of the time but it’s not standards-based. It doesn’t talk LDAP. It doesn’t talk to other authentication systems in any meaningful way. You need to set it up on every service you run. You need to set up ‘tokens’ in every web-server directory where it’s installed to tell it who to allow in. Etc., etc.
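By way of contrast, ‘talking LDAP’ means any service can authenticate a user by simply attempting to bind to a shared directory as that user. Here’s a minimal sketch using the Python ldap3 library; the server name and DN layout are invented for illustration:

```python
from ldap3 import Server, Connection

def authenticate(username, password):
    """Return True if the directory accepts a bind as this user."""
    # Invented directory server and DN structure.
    server = Server("ldaps://directory.example.ac.uk")
    user_dn = f"uid={username},ou=people,dc=example,dc=ac,dc=uk"
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()
    if ok:
        conn.unbind()
    return ok

# Every LDAP-aware service reuses the same directory - no per-directory
# 'tokens' to configure on each web server.
print(authenticate("jbloggs", "secret"))
```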

We have a myriad of great systems in the university, but they are hamstrung by the fact that we can’t do any kind of meaningful pass-through authentication. Luckily a colleague of mine has invented a mechanism for getting the system to work in harmony with OpenID, and we’re close to having some way to work with other systems more meaningfully in the future. I’m still very frustrated though, because although the current system works reasonably well for people in the OU, there is no reasonable way of allowing ‘authenticated visitor’ or ‘logged-in public’ access. We can of course merge authentication systems for particular services (as I do), but this causes problems later when the same visitors want to access other OU services.
I’m not sure how much of a problem this is elsewhere, but I would guess that the lack of a decent authentication and user verification service has put the OU back several years in development time, because every new project with a mixed user community (OpenLearn being the most recent example) has to find some sort of individual workaround. Central services don’t see a problem because most of the services they provide are staff-only (or student-only) and therefore simple for them; anyone else doing development across user spheres just has to find their own solution.

Rant over I’m off for a bath now!

Policing the internet..

This is a topic we covered as part of the “Future of Web Content” discussion, and I wondered how long it would be before things in the real world started to catch up. Not long, as it turns out: we’ve now got proposals for policing the internet, specifically to find and remove people who may be illegally downloading music. There’s an article about it on the BBC News website.

Four things that interest me about the proposal:

1. ISPs say it will be impossible to realise. I think that’s in line with what I was suggesting in my piece on FOWC.

2. The method of removing service is interesting because it’s also a method I was proposing for dealing with virus spreaders, except mine was more subtle: the idea of cutting them off gradually from the web. The problem, as pointed out today on the BBC, is that it’s indiscriminate: if you cut people off based on an IP address you cut off a whole family (or internet café station or library terminal etc.), not necessarily the individual you’re targeting.

3. The internet is jammed full of kiddie porn, suicide websites and freakishly mad and deviant stuff, and it’s interesting that the first attempt in this country to police it ‘en masse’ is caused by fat cats worried about losing their royalties from record sales – driven by commerce, not by any kind of moral, social or conscience-driven imperative. I think this is quite shameful personally. I’m not against protecting copyright, but there are other methods of protecting it, and there are other things to police.

4. The people who do it will find a way around it within a few weeks, so it’s pointless.

Future of Web Apps (day 2)

Day two of the web apps conference was much like day one. I could describe the talks, but that would give you nothing more than looking at the FOWA site and checking the tools out yourself.

I met up with some colleagues after the morning sessions (on SlideShare by Rashmi Sinha, and The Future of Presence by Jyri from Jaiku and Felix Petersen of Plazes) and we discussed the fact that there is a plethora of tools aggregating feeds from other services to provide you with a presence generator (where am I, what was I doing, what do I intend to do). This can be a very good or very bad thing, and people seem divided on whether it’s useful for them. For example, why would I want to be part of the Dopplr community when I don’t do any serious travelling (this is the furthest I’ve been in a couple of years), and the people I know who do travel have someone (usually their secretary or partner) who knows where they are! :^) I can see how it would be good if I were part of a crowd of frequent travellers, so Dopplr has its user group, but it’s just not for me.

This made me think about the themes emerging from the conference, so I’ll share those instead. They actually correlate with some of the predictions I included in my earlier future of content post, which is pretty good going since you can assume that about 80% of predictions never come to pass.

So here are the themes I took away

(i) Entropy and Chaos – Website builders can’t predict how people will use their sites. You can’t simply throw people off, since this makes them come back with a vengeance; rather, you need to ask why they are using the site in this particular way – it could be that they have an idea that needs to be pursued. This is how Dogster and Catster came into being: people wanted to share pet profiles with each other but couldn’t do it on other “about me” type spaces (they tried, and the administrators removed the pet ones, ha!). You need to adapt to users’ needs and keep control to a minimum, at least on social networking sites.

(ii) Omni-visual-presence (almost but not quite godlike!) – Having your presence available to everyone in the world: “where am I”, “what am I doing”, “where am I going”, “where have I been”. Big mashups are being created around presence and the aggregation of dynamic content (calendaring, Twitter, mobile location sensing (Plazes) and so forth) to create a real sense of the real you (a toy version of this appears after this list).

(iii) Semantic web or just screen scraping? – The semantic web is still proving a pig to bring to life, and the demos I saw around it were disappointing, to put it mildly: slightly clever screen scrapers. That doesn’t mean it won’t happen, but it’s not here yet.

(iv) Web apps developers are still extremely geeky – Not a problem really, just an observation. There were a lot of navel gazers and people with silly tee-shirts.

(v) The big players are still running the show – I think that Google in particular is doing so much that it would be foolish to ignore the big ones and focus on the tiddlers.

(vi) A lot of the ‘ideas’ were variations on a theme – There were a lot of similar developers working on Facebook apps, social networking sites and spinoffs, ‘washing lines’ or ways to collect data together for you (aggregators), publish-on-demand systems and presence helpers (Twitter et al).
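On theme (ii), here’s that toy version of presence aggregation I mentioned: merging a few feeds into one ‘what am I doing’ timeline using the feedparser library. The feed URLs are placeholders standing in for Twitter, a blog, Plazes and so on:

```python
import feedparser  # pip install feedparser

# Placeholder feeds standing in for Twitter, a blog, location updates etc.
FEEDS = [
    "https://example.com/twitter.rss",
    "https://example.com/blog.rss",
    "https://example.com/locations.rss",
]

entries = []
for url in FEEDS:
    for e in feedparser.parse(url).entries:
        # published_parsed is a time.struct_time, so it sorts chronologically.
        if e.get("published_parsed"):
            entries.append((e.published_parsed, e.get("title", "")))

# Newest first: one merged presence timeline from all the services.
for when, title in sorted(entries, reverse=True):
    print(f"{when.tm_year}-{when.tm_mon:02d}-{when.tm_mday:02d}  {title}")
```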

Future of Web Apps?

I went to the Future of Web Apps conference today in London (http://www.futureofwebapps.com). It was an interesting day, but as with many of these things I sometimes wonder if I couldn’t have got much of it by just researching on the web. It’s good to talk to the people involved, though, as I always like to hear the “what not to dos”, which are sometimes more important and which people don’t like to admit to unless you’re talking to them in person.

The speakers I enjoyed were Heather Champ, founder of JPG and community manager at Flickr, who did a duet with Derek Powazek of JPG. They gave a good talk about what builds communities (or drives them away), with some common-sense advice, including the fact that ranking can be a big negative: it engenders a “gaming” spirit in the community, where members start to compete, and even those with high rankings tend to fixate on the others above them, so it’s generally a no-no except in special cases. I discovered this myself when we started a top-ten ranking of things in our Knowledge Network (a system we developed for knowledge sharing at the OU). We found that if we ranked the list top to bottom, people always clicked on the top item, reinforcing its status at the top, so the list became self-perpetuating. We decided instead to gather the popular sites and display a random selection of them on the front page, showing a variety of popular sites while staying fresh and different (see the sketch below). …Anyway, they had much more to say and I recommend their talk.
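Here’s a sketch of the fix we settled on for the Knowledge Network front page (the data is invented): pick a handful of the popular items and show them in random order, so the top slot isn’t always the same self-reinforcing link.

```python
import random

# Invented example data: (title, click count).
popular = [
    ("Intranet search tips", 3400),
    ("Course production guide", 2900),
    ("Staff directory", 2750),
    ("Library e-journals", 2600),
    ("Room booking", 2300),
    ("Travel claims", 2100),
]

def front_page_picks(items, k=4):
    """Return k of the popular items, in random order."""
    return random.sample(items, k)

for title, clicks in front_page_picks(popular):
    print(title)
```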

The other speaker I enjoyed was Matt Mullenweg, the founding developer of WordPress (and I’m not just saying that cos I use it!). He looks about 12 and his PowerPoint presentation is nothing special, but what he says is real and his advice is sound, in my opinion.

In general the speakers were all good, although some were a bit too techie for me. The Dojo, Ajax and Google Gears stuff was great, but I nearly lost the thread (multi-threading in JavaScript, by the way – cool) a couple of times when the speaker displayed a few pages of code. Some talks were a bit too commercial, all about ‘monetizing the web’, but it was largely good.

The things that disappointed me were that I was bombarded with a hundred different ‘sponsors’ or mini-site vendors – Blurb, widr, yuuguu, wakoopa, pluck, baagz, zend, etc. Why the stupid names, guys?! It was a bit overwhelming: I’m just about getting my head around Facebook and MySpace and now I’ve got about 50 more to explore (if I get the energy!). They weren’t disappointing in themselves; it’s more that I know nine out of ten of them will be gone in a year or taken over by Google, Yahoo or Microsoft.

The second thing to disappoint me was that the presentations were largely ‘death by PowerPoint’. These so-called designers and web app developers actually put together pretty ropey presentations; I’ve seen my colleagues give much better ones. I was surprised that more presenters didn’t take a more off-the-cuff approach with a creative, dynamic talk – but then again, I probably wouldn’t do that myself if I were speaking to several hundred peers!

Those things aside, the conference is good and I’m looking forward to tomorrow once I recharge my batteries. I do think that, as one speaker said, ‘even in the virtual world people like to end up with an artefact, something real, and so it’s worthwhile thinking about how you can give them that’. I think the conference gives us that – it’s the reality around all the virtual.

The Future of Content (Part 4) – Version 0.9

I’ve been asked by Martin Weller to comment from a technical perspective on the future of content, as part of an experiment (explained by Martin here) in which we are jointly creating a series of posts about the future of content.

Already there is Part 1, Part 2 and a reply Part 3, and here is Part 4.

Martin has a very optimistic and Utopian view of content, and I think he is arguing from a content provider’s point of view. I’d like to explore things from a user’s viewpoint; in particular, I think the integrity or authority associated with content is an important part of deciding how it might be used. It’s worth looking here at how Wikipedia lives alongside the Encyclopaedia Britannica, for example. Wikipedia has come in for a lot of flak about the inaccuracy of its data and the credibility of its authors; Patrick McAndrew points to the “faking it” approach to knowledge. Jim Giles contends that it is as accurate as Britannica, according to an expert-led study conducted by Nature magazine. This was of course contested by Britannica, and depends very much on the data and techniques used. The issue for me is not that Wikipedia isn’t a great resource that can be used alongside Britannica, but that it cannot be relied upon without some complex method of screening and giving authority to content and publishing. I’m not contending that it isn’t accurate; what I’m saying is that it can’t be guaranteed to be accurate.

Take the analogy of a car: you can buy from a dealer, with a guaranteed warranty and the peace of mind of knowing that if things go wrong someone else will sort them out for you. You can buy second-hand outside the dealer network, with a limited warranty and less comeback, or you can be given a car for free, which seems excellent until it goes wrong or is found to be riddled with holes. Content provided free on the web can also be riddled with holes, but it has its place; as I said earlier, Wikipedia sits alongside Britannica because they both have their place.

Cory Doctorow gave a speech in which he said “New media don’t succeed because they’re like the old media, only better: they succeed because they’re worse than the old media at the stuff the old media is good at, and better at the stuff the old media are bad at. Books are good at being paperwhite, high-resolution, low-infrastructure, cheap and disposable. Ebooks are good at being everywhere in the world at the same time for free in a form that is so malleable that you can just pastebomb it into your IM session or turn it into a page-a-day mailing list.”

And here’s where I pull the content issue forward a step. In the “good old days” people with visual impairment had to make do with books and someone interpreting or reading to them, or with conversion to braille; eventually audio tapes were produced. Now people with accessibility issues can interact with media in ways they never could previously, and share their new-found knowledge with others; devices like screen readers, eBooks, talking books, MP3 players and PDAs bring old media to a much larger group of people than ever before. Surely this can only be a good thing? Martin Weller contends that the thought of having his books stored digitally looks tempting to him: “I like having books as objects on my shelves, but I used to like having vinyl albums and CDs also, and now I only have MP3s”. I think that books will always have a place in our society, but maybe, like vinyl, they will be relegated to being objects of wonder rather than regularly used items.

I want to talk about some ways print media is being reinterpreted for a web audience, in particular print on demand (Amazon et al) as a method of providing a traditional medium (books) with reduced overheads that can be passed on to the consumer, so that people can get what they want, when they want it, at a reduced cost – but not free! This works well, and I think it is a compromise between the corporate “publishing control” of the big publishing houses and the Utopian but potentially flawed free and open access materials. There is also, of course, a growing number of people using MySpace and YouTube for publishing material at the lower end of the spectrum. Sam Jordison writes a thought-provoking piece on the subject; in particular he says “Most attempts have been doomed to failure because the website just doesn’t offer the same advantages to the printed word as it does to music (after all, it’s far easier to listen to a three-minute song than to read a novel, or even a short story, on the site’s notoriously badly designed blog interface). Nevertheless, these literary MySpace pages, complete with links to samples of their work, attract a large network of online “friends” who share similar tastes and interests.”

He then goes on to add that “with the net the worst that can happen is that you’ll hurt your eyes. There’s also every chance that you’ll find something you like, you can put it in your favourites to watch how the writer develops and follow the links he or she provides to more like-minded authors. That’s the beauty of it.”

In my opinion there is something to this, but also to Ray Corrigan’s contention that information wants to be expensive, because it’s so valuable. I worked on a project last year looking at creating a bartering room on the web to allow companies to ‘buy in’ to academic knowledge delivered personally for them. This project was based on a model developed by a Dr Hans-Peter Barmeister in Germany, who had companies such as Boeing and Hewlett Packard clamouring to work with him because they wanted to pay for the information: they wanted (a) exclusivity, (b) a guarantee on the integrity of the study and information provided and (c) a tailored summary or extract from a wider research study. The counter-argument, that this was largely freely available anyway and they could ‘filter’ it themselves, doesn’t hold water: they want to pay for expert knowledge, expertly extracted.

This leads me on to the subject of security and integrity. If a community is closed then control over that community can be managed easily; as the community grows, so does the complexity of the information, and eventually any control mechanism will break. According to Schneier’s law, anyone can come up with a security system so clever that he can’t see its flaws. The only way to find the flaws in security is to disclose the system’s workings and invite public feedback.

So where does that leave us? You’ll note that I haven’t mentioned web 2.0 yet, and I don’t intend to. Why? Because the internet and the web are evolutionary concepts, and what interests me is not a collection of current technologies branded as 2.0 but rather the directions the web (and its content) is taking. So in conclusion, here is a potted list of predictions based on what I’ve been involved in researching…

(1) A cashless and cacheless society

As knowledge becomes increasingly ‘on demand’, the need for caching information disappears. There is no such thing as a TV schedule in the traditional sense; information is provided to individuals as they need it, just in time. Transactions take place in the background (look at the ‘touchless payment’ cards being brought out now for a preview of the future).

(2) Personalised filtering

There will be better ‘background intelligence’ services developed to filter content, providing “authority” information and ensuring quality of resource and integrity of content; they will be user-centred and adaptive to suit individuals. Look at where the semantic web is going for a preview of this, and at what Google in particular is doing to leverage the capabilities of its powerful search engine in more tailored ways. This will inevitably lead to the merging of the “Wikipedia” and “Britannica” approaches.

(3) Ambient and Ubiquitous

Two words I hear a lot, and they really describe how content providers and services will disappear from sight while at the same time being everywhere we need them, providing us with tailored and contextually aware information. The intelligent fridge is an example of this, but a more useful one perhaps is the use of geocaching for tourism, where you can provide an interesting and tailorable guide around a place (city, village etc.).

(4) A smaller divide between the “haves” and “have nots”?

The web will grow tremendously and more content will be freely available. I think that our society will be less divided than ever (at least in western societies) because people from low-income families will begin to benefit from these advances through more public and free access to media. Access to technology may increase, but the ability to use it correctly remains a problem that needs to be addressed.

(5) Those who think they control information will get a wake up call

As the amount of information increases and access is widened, governments that seek control of that information will find that the more they try to control it, the more things squeeze out at the edges.

(6) Systems will target viruses, not everyone else

The current approach to dealing with viruses (let’s put up a firewall and close everything down!) is fundamentally against the original principles of the web and is destructive. I believe that the use of localised security measures will soon be abandoned in favour of ‘search and destroy’ – targeting and isolating viruses – which may mean an intelligent “turn out the lights” approach to virus control. I believe that in the future we’ll get so good at it that viruses will cease to be an issue (I wish!).

 (7) People will become the user interface.

There will be no such concept as a good user interface because we’ll be that interface. The way we want to see stuff will be completely our domain and controlled by the individual.

(8) Technology will diversify, not integrate?

A controversial one here, but I think that the integration of everything onto a single device is reaching its limit. In fact people are waking up to the fact that using a mobile phone for watching video is like listening to the radio on the TV: a bit gimmicky, and something people rarely use except where no alternative exists. I think there will be much more in the way of alternatives, and people will have more freedom of choice in devices and technologies.

(9) Combining in new ways for added advantage

I think that combining technologies like eInk (electronic paper) and ePen will give us real advantages over traditional technologies, and we will see this come of age when issues of power and wireless network access cease to limit their use. I think we’ll find they improve on the traditional and allow us more freedom.

I think I’m waffling now so I’ll stop, but I’ll add more useful references to this and possibly proofread it when I get more time. In the meantime, back to you, Martin!