“WHO’S TAKEN MY CHALK?”
By Antony Funnell
Keynote speech on education, techno-determinism and technology worship. Presented at the ‘English and History: Common Ground and Different Disciplines’ conference, Melbourne, 16th March 2012. The conference was organised by the History Teachers’ Association of Victoria and the Victorian Association for the Teaching of English.
As Zorba the Greek would have said, my education involved the ‘full catastrophe’. Caned into submission by the Sisters of the Sacred Heart, I was then handed over to the Marist Brothers at the age of nine, in order to have knowledge flogged into me.
When I was six, in Grade One, I was taught by Sister Mary Deriche, a withered old nun with a crooked pointing finger. She seemed older than the Earth itself, with a taste for wearing black at a time when the sartorial trend in religious orders had definitely moved in favour of white.
Sister Mary Deriche had a notable attitude toward technology in the classroom – it was apparent that she wasn’t in favour of it! At least, that is to say, not the new-fangled stuff like biros and exercise books and overhead projectors. At the beginning of every day she handed out some chalk and a pile of small black slates with wooden frames, which we then used to screech our way through lessons.
I’m sure we were the only students in the Western world in 1971 still writing on the original form of the tablet!
By the time the early eighties rolled around, and I was getting ready to leave school, the personal computer was just starting to have a presence. For the record, there was only one at my college and it was located in a small room next to the Headmaster’s office. It was beige and clunky and more of a curiosity than a tool of education or enlightenment.
Fast-forward to the twenty-first century and to my eleven-year-old son. Now, I shouldn’t speak out of school, but we spend a fortune on his education. I can’t tell you exactly how much, because I’m always afraid to look. Needless to say, it’s in the squillions. But I’m happy to report that he’s doing exceptionally well – which is probably not a surprise given that his mother has a PhD in education and two of his grandparents are former principals.
My son’s school has a surplus of funds and no shortage of equipment – technology is everywhere. But what real benefit does all that technology bring to my son’s education? That’s a question I often ask myself.
I’ll come back to him a little later.
This presentation is about the relationship we have with technology – and its importance in the process of education. But, perhaps more importantly, it’s also about its limits.
Let me tell you about a theory I have.
We are currently living, I believe, in the second great age of techno-worship. A time when people have come to believe that technology has the answer to all problems. I say second, because the first great age of technology veneration began about two hundred years ago. You see, the Victorians also lived in an era of constant change driven by rapid advances in technology.
In 1838, the second year of Queen Victoria’s reign, the paddle-wheeler Great Western kicked off the modern age of intercontinental transport by crossing the Atlantic under the power of steam, not wind. Over subsequent decades, railways networked Europe, the United States and eventually the rest of the world. Skyscrapers were born, electric lighting was invented, along with the telephone, motion pictures, the machine gun, modern artillery, high-speed printing and the automobile – to name just a few of the many achievements that were realised during that period. It was a time of technological advancement at a phenomenal, often disruptive pace — not unlike today.
We remember the Victorians for being rather dour – for building and squabbling over empires – but arguably at the core of the Victorian spirit was an overarching faith in technology: an approach, a belief, a trust that all problems could be solved by clever design, by newer machines, by better science.
Tom Standage is the author of the book The Victorian Internet and he’s also the digital editor of The Economist. When I talked with him about the similarities between our modern technology-focussed society and that of the Victorians, he speculated that it was only the bloodshed and destruction of the Great War that finally brought about the end of humankind’s initial period of techno-worship.
The argument being that after the carnage of World War One, technology began to be seen more as a double-edged sword: it was still viewed as a means of advancement, but in the wake of the Western Front it was also treated with fear and distrust. People suddenly realised that it had the potential to make their lives more vulnerable, to change things – in a very big way – for the worse. And if you think about it, that fear of technology mixed with wonderment was clearly there in the science fiction of a great part of the twentieth century: Metropolis, Gattaca, 2001: A Space Odyssey, Blade Runner, Total Recall, and so on.
If we have now entered the second age of technology worship, as I’ve already boldly declared, when did it start? Well, you could mount a plausible argument that a dormant utopian belief in all things tech was revived by the phenomenal growth of computer power in the very late stages of last century. Computers suddenly began to enhance existing technologies like never before: factories became fully automated; neurosurgeons undertook delicate brain surgery via a robotic arm and a 3D digital display; and aircraft could suddenly fly and land themselves without the need for a pilot.
Corresponding with that explosion in computing power and its application, we also saw the end of the Cold War. Nuclear weapons still pose a threat, of course, but with the demise of the Soviet Union and the end of the arms race between the US and the USSR, the ever-present risk that technology could at any moment see the entire planet blown into a wasteland began to fade from most people’s consciousness. And subsequently, technology lost some of the menace that had first attached to it during the Great War.
So, like the Victorians, many of us now believe in the idea that technology is a sort of universal solution to all of society’s ailments – that the only obstacles are securing adequate funding and finding the right technical expertise. It’s a belief that’s found fertile ground in many fields.
When US soldiers won the initial conflict in Iraq in 2003, but failed to prevent an insurgency, the Pentagon’s answer wasn’t just to deploy more troops, but to significantly boost funding for new technologies and their deployment. Unmanned drones began to be piloted remotely from desktop computers in Virginia. And a state-of-the-art robotics program called Future Combat Systems was quickly rolled out, receiving more than $US230 billion in initial funding.
P.W. Singer, the director of the 21st Century Defense Initiative at the Brookings Institution in Washington, told me in February 2009 that military robotics was taking on a new primacy within defence planning. He said: ‘I think the extent to which science fiction is becoming battlefield reality will surprise and maybe even scare people.’
And he mentioned that he’d spoken to a three-star US general who told him: ‘Very soon, we’re going to be getting to the point of talking about tens of thousands of robots.’
We also now live in a world where it’s commonplace to credit digital media and its various platforms with bringing down tyrannical governments – as though blogs man barricades or tweets repel bullets.
Appearing on CNN early last year – as Egypt’s Mubarak regime was unravelling – Middle East-based activist Wael Ghonim echoed many commentators in his unequivocal praise of social media. He declared: ‘This revolution started online, this revolution started on Facebook,’ before telling the presenter Wolf Blitzer: ‘I always said that if you want to liberate a society, just give them the internet; if you want to have a free society, just give them the internet.’
It’s a lovely thought. It got lots of air-play. But, of course, it quickly proved to be not quite that simple. In Libya, talk of the revolutionary power of digital media all but died away when Moammar Gaddafi and his forces decided to hold fast and dig their bloody heels in.
And still today, for those unfortunate enough to live in a rebellious part of Syria, Facebook doesn’t really offer much protection against Bashar al-Assad’s tanks and missiles. In fact, quite the reverse: Syrian state security services have proved quite good at using social media to track dissidents, as have the Chinese authorities, the Iranians and most other oppressive regimes.
But perhaps I’m being picky.
Then there’s the truly extreme end of the tech-worship spectrum.
Followers of the Transhumanist movement believe we’ll soon have the technology to cure all disease, halt the ageing process and live forever. Their flag-bearer is a Cambridge-based gerontologist named Aubrey de Grey, a man with a wild Rasputin-like beard and a keen sense of the media.
There’s also the closely related Singularity movement, which is enormously influential in technology circles in the United States – and Australia – and whose followers believe in the primacy of machines over people. The basic idea is that computers will one day become so smart they’ll surpass the human brain, and then eventually meld with our own species to exponentially improve human intelligence.
I kid you not. It’s a weird mixture of science, science fiction, IT, new age jargon and serious money.
The followers of the Singularity have even set up their own university. It’s headquartered at NASA’s Ames Research Center in California and receives considerable support from major technology companies and US government institutions.
The computer scientist and early pioneer of virtual reality, Jaron Lanier, seems much too countercultural for organised religion, but he claims he sometimes feels surrounded by the techno born-again.
Says Lanier: ‘There is a funny religious quality to it, particularly in the world of elite engineering. A lot of us have come to believe that the internet is coming alive and turning into a giant global being of some kind, a sort of collective intelligence that actually becomes almost godlike. And there’s a set of beliefs that recreate the beliefs of traditional religions around this perceived being. So for instance, there’s an afterlife because the collective super-being of the internet will become so capacious that it’ll scoop up the contents of all our brains and give us everlasting life in virtual reality, and so forth.’
And he adds, ‘If you think I’m exaggerating, I’m not. A great many very serious and very powerful and rich people in Silicon Valley buy into this set of ideas, so there is kind of a religious sensibility driving some of these designs.’
I mention these things because I think there is a tendency for different professions to view the intrusion of new and digital technologies in isolation – as if it’s only happening in their world. But from the military to broadcasting to exhibition-curation to teaching, we are all in a process of re-evaluating and renegotiating the role that technology does and should play in our workplace – in assisting us to perform our tasks.
We’re also trying to sort out the difference between fact and fiction – between reality and what Professor Graeme Turner, the director of the Centre for Critical and Cultural Studies at the University of Queensland, calls the ‘aspirational rhetoric’ that so often gets built up around the internet and digital media.
Specifically now to the education sector…
And here’s a little anecdote I love concerning Nicholas Negroponte, the head of the One Laptop Per Child Project – the OLPC.
Negroponte, by the way, likes to throw his computers across the room at press conferences – to demonstrate how strong they are. Knowing, I guess, that we journalists love those sorts of theatrics. Anyway, at a conference in the US late last year, he proposed a radical way of distributing his devices – he told his audience he wanted to ‘literally take tablets and drop them out of helicopters’.
Tablets being tablet computers, of course.
Nicholas Negroponte is pretty clear about the fact that he believes teachers just get in the way of educating the poor and the illiterate. Get the technology into the hands of the kids and they’ll start educating themselves, seems to be his approach. It’s a belief that goes by the appallingly jargonistic name of ‘techno-determinism’.
Technology, according to the techno-determinists, makes a difference simply by its presence – by its very existence. And the people who subscribe to that idea aren’t always the ones you’d expect.
Professor Richard Heeks, from the Centre for Development Informatics at the University of Manchester, is one of those who contributed to the United Nations’ 2010 Information Economy Report. Heeks has been a vocal critic of the way in which the development community approaches education – specifically the use of Information Communication Technologies (ICTs). But even he believes that technology can have transformative power in and of itself.
Here’s what he told me when I asked him about techno-determinism. He said: ‘We have often argued that you need so many other things to be put in place other than a technology. You need skills, you need political change, you need regulations to change, you need mindsets to change, and so on. I think all of that is undoubtedly important, but I do get signs of organic change when ICTs arrive in a community, that the communities themselves start to find new ways of doing development as a result of ICTs coming into those communities.’
In truth, it’s not hard to see the appeal of the techno-determinist message. Go to the One Laptop Per Child website and you’ll find a series of shots of dirt-poor children grasping shiny new laptops. The imagery is tailor-made for a gadget-loving Western audience – an audience that deeply wants to believe that combating illiteracy is as simple as plugging in and logging on. But the curious thing about the techno-determinist approach is that it continues to attract admirers even though the proof of its efficacy is scant indeed.
Since 2005 the One Laptop Per Child project has distributed more than two million computers in developing countries, with a quarter of those devices ending up in Uruguay – a poor country, no doubt, but one where literacy is already quite high.
Now, in Uruguay, the project has received official blessing and support, but despite that, in 2010, the Uruguayan government conceded that only around half of the country’s teachers were using the laptops their students had been given for educational purposes. One of the reasons put forward was a lack of training in how best to use the computers in a classroom setting.
And it was a similar story in nearby Peru. An Inter-American Development Bank report into the use of the OLPC laptops in that country found that only around 10 percent of teachers said they’d been given adequate technical support – and only 7 percent reported receiving ‘pedagogical support’.
The study’s authors wrote: ‘In the classes that were observed during the qualitative evaluation, it was noted that laptops were being regularly used, but in most cases their use has not substantially changed practices.’
Which probably shouldn’t come as a surprise: handing laptops to teachers in Peru and Uruguay without training them in how best to use them seems to me a little like giving someone a car but not explaining the road rules.
Let me now tell you about Kentaro Toyama.
Toyama is a researcher in the School of Information at the University of California, Berkeley. And you could hardly accuse him of being a Luddite: he has a PhD in computer science, and back in 2005 he co-founded Microsoft Research India. As he tells it, he was once a firm believer in the idea that Information Communication Technology can change the world simply by its very existence, by its very take-up. In other words, he was a full-blown techno-determinist.
But what was once belief has now turned to deep scepticism.
Toyama caused waves in 2010 when he penned an article for The Boston Review entitled ‘Can Technology End Poverty?’ Simply put, he now believes technology only really changes people’s lives when the recipients of that technology specifically and consciously want to use it to achieve change, and only if they already have the capacities to follow through on such a goal.
In all other situations, he argues, the adoption of technology largely just amplifies existing behaviour.
Here’s how he tells it: ‘I used to lead a research group in Bangalore for Microsoft,’ he says, ‘and what we did was to spend our time trying to understand how technology impacts very poor communities, both in rural villages as well as in urban slums. I was certainly optimistic, in the sense that I felt that we could use these powerful technologies in some way that would really help and impact very poor communities.’
‘But over time,’ says Toyama, ‘what I kept finding was that even in our successful projects, the impact of the technology depended entirely on the people who were either manipulating the technology from the outside, or using the technology from the inside. In both cases, what we found was that you needed well-intentioned, competent people using the technology in order for the technology itself to have a positive impact.’
In layman’s language: Toyama says people who are given new technologies simply use them to enhance or extend the sorts of activities in which they’re already engaged. So, for example, those in a village engaged in commerce will use the computers they’re given access to for business. But those who aren’t interested, for whatever reason, in education or intellectually improving their lives, will simply use their PCs for entertainment and little more. Which is neither good nor bad in itself, but could lead to a gigantic waste of development resources.
And, if you follow Toyama’s logic — that technology doesn’t change behaviour, it merely amplifies it — then it’s not hard to imagine that at its worst, a blind devotion to the power of technology as a tool of international development could actually lead to a widening, not a tightening, of the information and wealth divide between rich and poor.
So what’s the Toyama solution?
Well, Kentaro Toyama says, ‘if you’re not tied to using the technology, consider seriously whether it’s the technology that will help, or whether it’s some investment in human capacity that will pay off more’.
Then, he says, ‘if you are invested in using the technology, ensure that the technology is applied to an existing social institution that’s already having a positive impact – so that the technology is in support of a working system.’
I like to sum it up like this: literacy before laptops, and people before PCs.
As the California-based technology entrepreneur Rose Shuman once said to me: ‘It’s generally a good idea – or a sort of universal design ideal – that technologies should be subordinate to human instincts, and to human logic. And the most successful technologies cut across what country you live in, and are really about making technologies analogous to something that’s already in your life.’
Now, here’s a sobering fact that few people realise – particularly those who get overly excited about the potential use of the internet as a tool for education and advancement in the developing world.
In India there are more than 1.2 billion people. But the number of regular users of the internet in that country is only around 80 million.
I know this is a room filled with English and History bods, not mathematicians, but it’s not hard to work out that that represents less than seven percent of the population.
And here’s another surprise – the truly global ICT of the early twenty-first century isn’t the web, according to the World Bank, but television. A 2010 study by the bank found that while almost two-thirds of people in the developing world have a television – up by 25% in the last ten years – internet access remains at less than 10%.
And as Charles Kenny, the Bank’s Senior Economist told me at the time, that’s unlikely to significantly change anytime soon, because, he said, ‘using the internet successfully really does take a fairly high level of quality education, which is sadly lacking in many parts of the developing world.’
OK, at this point let’s now return to my son … and his super expensive school.
He’s in Grade Seven and he’s always had excellent teachers. So we’ve been lucky on that score. But as computers and various digital devices and platforms have begun to enter into his school experience, what I’ve noticed is the way in which they have, on some occasions, led to a narrowing rather than a broadening of his ability to explore and to express himself.
A great many of his assignments are now done on computer, and for reasons of online safety, the number of sites the students in his class are allowed to visit in order to gather pictures, text-based information, video, graphics and audio is extremely limited.
As a parent with concerns about online safety, I can understand the desire for limitations. But I can’t help feeling that the end result is a sort of deadening conformity. Assignments all look and read very much the same, because the students are drawing their material from the same limited sources. And a simple class exercise – making a brochure about renewable energy, say – ends up being an exercise in digitally cutting and pasting professionally pre-produced artwork and graphics, rather than an expression of a child’s own artistic creativity.
There’s also a point to be made, I think, about the way in which our children are being taught to interact with computers.
In his 2011 book Program or Be Programmed, the celebrated US media theorist Douglas Rushkoff argued for a back-to-basics approach when it came to computers and education. Society will do itself a great disservice in the future, he told me in an interview just after the book’s release, if we don’t once again start teaching the young about the actual nuts and bolts of computing.
He said: ‘In the same way we think of it as important for kids to know basic math and long division, I think kids should understand the very basics of programming. So that when they operate a computer they don’t think of the computer in terms of what it’s come packaged with, but they think of the computer as a blank slate.’
And he went on to say: ‘It’s just like introducing kids to reading and writing. You show them books, but you also give them blank pieces of paper where they can write their own words. I feel like those few schools that do teach computers, teach kids not really computers, they teach them Microsoft Office, which is great for creating the office worker of the 1990s, but not for creating the people who are going to build the twenty-first century.’
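To make Rushkoff’s point concrete – and I should stress that this little illustration is mine, not his – here is long division itself, the very example he reaches for, written out as a dozen lines of Python. It’s a toy sketch, nothing more, but it turns a familiar piece of school maths into a first lesson in telling a computer exactly what to do:

    # Long division, step by step, the way a child does it on paper.
    # A toy sketch: Python's // and % operators would give the answer
    # in one line, but the point is that the child teaches the machine
    # the procedure, rather than just operating a packaged calculator.
    def long_division(dividend, divisor):
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):  # bring down one digit at a time
            remainder = remainder * 10 + int(digit)
            quotient_digits.append(str(remainder // divisor))
            remainder = remainder % divisor
        return int("".join(quotient_digits)), remainder

    print(long_division(1071, 7))  # prints (153, 0), just as on paper

Trivial, certainly. But it treats the computer as Rushkoff’s blank slate, rather than as an appliance that arrives with its uses already decided.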
A similar sentiment was expressed recently in Britain’s Observer newspaper by John Naughton.
Naughton is Professor of the Public Understanding of Technology at the Open University in the UK and he wrote: ‘We have to accept that ICT has become a toxic brand in the context of British secondary schools. However well-intentioned the thinking behind the ICT component of the national curriculum was, the sad fact is that it has become discredited and obsolete. As a result, educational thinking about the importance of computing and information technology in this country has been stunted for well over a decade. We’ve taken a technology that can provide “power steering for the mind” and turned it into lessons for driving Microsoft Word. We wouldn’t dream of teaching pupils about German culture without expecting them to speak German. The same holds for computer science.’
And he then added something of a warning: ‘the world our children will inherit is one that will be shaped and controlled not just by physical realities, such as climate change, but by computer software.’
Now, the focus of John Naughton’s article wasn’t only the state of the British education system. It wasn’t all doom and gloom. He also had some good news to deliver about the release of a very small, credit card-sized device that’s just been developed at Cambridge. It’s called the Raspberry Pi.
I have to confess, at first sight, the Raspberry Pi doesn’t look all that impressive – it’s a mini computer board. But it has two interesting features. The first is that it’s incredibly cheap – it costs only around $32 a unit. And the second is that it’s been specifically designed to allow people to once again be creative with computer programming.
Apparently, as long as you have a keyboard, you can plug it into your TV and create your own computer. And because it runs on a Linux operating system – in other words, it’s all open-source software – you can then use it to do your own programming, to write your own code.
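And once it’s plugged in, the barrier to entry is remarkably low. Something like the following – a deliberately trivial sketch of my own, and nothing about it is specific to the Raspberry Pi; it’s just the standard Python that runs on any Linux machine – is about all it takes for a child to make the device do something of their own devising:

    # A first program: the computer as blank slate, not appliance.
    # Nothing here is Pi-specific -- it's plain Python, which runs
    # on the device's Linux operating system as it would anywhere.
    name = input("What's your name? ")
    print("Hello,", name)
    print("Now watch me count, because you told me to:")
    for number in range(1, 11):
        print(number)

A handful of lines, no packaged software in sight – which is precisely the sort of creative tinkering the device’s designers say they have in mind.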
Naughton and others hope the low cost of the device will appeal to schools, even under-funded schools. And – fingers crossed – educators will be able to use the Raspberry Pi to begin teaching children the basics of programming.
Certainly one to watch.
So, in summary, what are the lessons?
Well, there are several points to take away.
One is the need to ensure our computers and other digital devices work for us, not us for them. We should use them only when they enhance our work or leisure experience, not just because they’re available and they’re convenient. Or, worse still, because there’s an expectation that they represent the future.
Connected to that is the corresponding need to be conscious of the deskilling that can occur through an over-reliance on technology. Do we ever really get to know a city, for example, if we only ever navigate our way through its streets by using the GPS device mounted on the dashboard of the car?
Of course, the giving over of certain skills in the embrace of technology has long been an issue – just think back to the controversy over the arrival of the pocket calculator in classrooms back in the 1970s.
But modern communications technology is arguably far more pervasive, and a lot more subtle and persuasive, than previous generations of gadgets. Programs are written and marketed by technology companies to give the illusion of choice and involvement, when often they’re really just steering the user down a pre-ordained path. In such circumstances the avenues for true creativity are necessarily limited.
And secondly, in a world that blindly worships its gadgets and devices, it behoves us to be far more questioning of the multi-national communications and technology companies that push us toward their products. Many people forget, for instance, that Facebook is actually a marketing platform – it’s not a town square or a public service. It exists solely to make money for its owners.
And here’s another example of what I mean – earlier, I mentioned Wael Ghonim in the Middle East.
He’s been interviewed innumerable times in the last year or so about the Arab Spring and the role of social media in bringing down the Mubarak regime. In fact he’s currently doing a global book tour. Ghonim is always referred to as a ‘social media’ or ‘online’ activist. But what’s not mentioned anywhere near as often is the fact that at the beginning of the Arab unrest, he also happened to be head of marketing for Google in the Middle East and North Africa.
When you know that, having Wael Ghonim preach to you about the revolutionary powers of the internet is a bit like having the marketing manager of Bunnings tell you why it’s essential to upgrade your power tools.
As Ethan Zuckerman, the head of the Center for Civic Media at MIT, told me last year: ‘We need to be consumers who look at technology in terms of what its possible politics are, what its possible effects are, and then decide how we use it, and how we adopt it, based on what we think those politics are’.
So – embrace your inner sceptic!
Now, in closing, let me be clear about one thing.
I’m not suggesting there’s anything wrong with technology – or with its use in education. There is nothing wrong, per se, with incorporating social media, YouTube videos, GarageBand and Google Docs into the classroom – unless it’s being done for its own sake.
That technology should be seen as a tool toward an end, not an end in itself, might seem obvious enough, but I can tell you that even in my own profession – journalism – there are many, including myself, who worry about the tendency of media organisations to rush toward establishing a presence on new platforms, at the expense of quality content.
We are, though, I believe, at a bit of a turning point where, after a decade or more of techno-boosterism and techno-utopianism, we’re finally beginning to hear the voices of those eager for a genuine debate about balance – the balance between the benefits of technology and its negatives. Even Michael Wesch, the associate professor of cultural anthropology at Kansas State University – the man I’ve seen dubbed ‘the darling of tech-evangelists in the educational sphere’ – now talks about the importance of the teacher, not just the technology.
In a recent blog post he wrote: ‘Participatory teaching methods simply will not work if they do not begin with a deep bond between teacher and student. Importantly, this bond must be built through mutual respect, care, and an ongoing effort to know and understand one another. Somebody using traditional teaching methods can foster these bonds and be as effective as somebody using more participatory methods.’
And he then added: ‘Within the broader ecosystem of a college campus, not everybody needs to jump on board with teaching with technology. But everybody does need to be on board with the goal of creating an environment in which a rich participatory culture of learning can grow.’
As I said earlier – literacy before laptops, and people before PCs.