There were points in the early 90’s, when New Media was called Cyberarts, when I had not thought about the role of the later-stage New Media artist. What do I mean by this? When I speak of the later-stage New Media artist, I’m talking about the artist in their late 40’s or even 50’s who has a different set of experiences and skills than the 20-something who is whipping together amazing code-based works. It’s my belief that not only do their mental processes necessitate different practices, but their experiences also confer a different civic responsibility for how those experiences are put to use. This, however, also requires a delicate statement about the role of that artist in the creative process.
This position was driven by a conversation I had with Ken Goldberg when he gave a talk at the Wexner Center at Ohio State University. After his talk, we commiserated about hitting our forties, and about how it was becoming harder to retain new information. Sure, we were becoming “Senior” New Media artists, but what exactly did that mean?
At first, I thought this meant that we had simply become older artists who could “no longer keep up” with the “kids” because we could not adapt quickly enough to the technological advances confronting us. To accept this as truth is only to buy into the Taylorist techno-capital paradigm of “produce or perish” that allows software companies to bring out questionable updates every year, demanding that these updates keep the artist (or even the company, at broader scales) “relevant”. This keep-up-or-fall-behind mindset, taken to its logical extreme, posits that only those with the most up-to-date technologies and techniques are equipped to engage with culture. This, of course, is a blatant lie except in the case of a very small number of commercial artists whose livelihoods are directly linked to the specific exploitation of new technologies. For the rest of us, this form of technofetishism has far less impact than one would expect.
Upon reflection as an academic, and in talking with others in the market, I find that the advancing years raise far more issues than whether we can adopt (or even afford) new advances. These issues concern the person in firms and in academia, as well as the older independent with a growing profile. They have to do with increasing responsibility, the role of history and experience, and the changing nature of the older artist’s cognition.
The older New Media artist is called upon to take on larger administrative tasks than those encountered by their younger counterparts. Many, as their careers advance, take on larger numbers and scales of commissions and coordinator/directorial roles, and these responsibilities consume the time that was once spent on endless late-night research in one’s 20’s. Personally, many of us have growing families, families in decline, and more complex personal matters than in our 20’s, although at that age I could not have imagined more complexity. Although this may seem obvious, to think that one should be both managing the festival and coding its website (only as a metaphor) would show a skewing of priorities. Therefore, although a lot of us in the mid-to-late stages might want to be doing all the work, it doesn’t make sense.
Secondly, we have the ambivalent burden of history and experience. In short, this is a pragmatic issue: where a younger artist might use an Atari 800 computer in an ironic/kitsch role, never having really used one, the older artist might recontextualize the ‘old’ technology in a critical sense because they knew the original context of the Atari itself! This is an apt metaphor for the increasingly complex sets of associations that the older artist (even being in your 40’s constitutes ‘older’ in New Media) constructs. This broader view comes from having more experiences, contexts and contacts, and therefore creates the opportunity for more complex sets of associations. Therefore, it might be argued that as the mid-life person shifts from a learning-based to a learning-and-reflection-based cognition (I say this only from experience and conversation, not from medical evidence), there is much more material that we as New Media artists can draw upon.
Given that the “OG’s” (as Rob Ray of Deadtech once put it to me) of New Media are moving into these positions of experience, historical context and responsibility, what does that mean? This was put very well as a metaphor in a conversation I had with my department chair…
Before saying this, I want to say very delicately that this is not a call for the subjugation of younger artists or adjuncts under “idea people”, that is, people who know what they want to do but have no clear idea of how to do it, or whether the current technology can do it, and who delegate the work to junior artists and technicians. Although the “Idea Person” is a contemporary analogue of the master’s atelier, there is a problem in that they often have no applied knowledge of the technology they wish to use. This is analogous to creating oil paintings without ever having taken charcoal to canvas. The artist (sic) has to be an individual with at least some applied knowledge of their medium so that they may ascertain the best usage of their resources and tools. In my estimation, the best mid-to-late career New Media artist is the one who can hack together most of an armature or organizational chart, with specific methods, for a project from which the development team can then flesh out the more intricate (and latest) technical details. And in my opinion, it is desirable that the artist, given the time and cognitive resources, could do it themselves.
The question was what the role of the mid-to-late career New Media artist is in terms of larger and institutional practices. The problem was the need for many New Media programs, or even practices, to have expertise in cutting-edge technology, theory, history, and administrivia, and the time to deal with them all. My solution was that this is the role of adjuncts, junior faculty, and interns, if they can be afforded. This answer is not derogatory in the least; it is a matter of the redistribution of skills and experience. For example, many adjuncts and interns are also freelance developers and designers who are more dependent on the cutting edge, and are therefore more adept at these skillsets. The people more linked to new technologies may be logical choices for the more technically-oriented classes. On the other hand, the tenure-track, tenured, studio masters, and so on are often responsible for more administrative/managerial functions than junior associates, limiting their time for learning new technologies. In addition, the experience of the senior New Media artist is uniquely valuable for its novelty (for that historical New Media experience is relatively new), and should be channeled into conceptual/historical/cultural/experimental/organizational praxis. That is, the mid/late New Media artist is placed in a position of conceptual and organizational leadership, with the latest technical execution distributed to those closer to the cutting edge.
The problem with the previous proposition is that it appears to create a new hegemony of seniority; this is something I am quite cautious about. The problem is that as I look toward my 50’s and 60’s, I’m not going to be learning JAVA10 and writing XML parsers for BioGPS, nor are many of my associates going to have the time, though not for lack of desire. And as the tools advance, I wonder how people will maintain their engagement without encountering obsolescence. This discussion is an attempt to question the demands placed upon later-stage artists and to derive possible solutions that offer developmental paths for artists in a field so defined by technological novelty. I hope that these suggestions are seen as constructive, and I welcome dialogue on the matter.
In the area of New Media, while not necessarily “new” as such (having been in existence for 40 or so years), I’ve seen an ahistorical bent that dismisses earlier waves of artists as ‘obsolete’. Earlier artists coming to lecture at colleges seem ‘old-school’ to the students because they are not doing live data mining from Myspace (only as a metaphor, though similar instances have been seen). But are they? In my opinion, only to those who regard the latest technological advances as the defining factor of ‘relevant’ art. This is a problem that should be addressed: there needs to be a negotiation whereby later artists have at least an awareness of newer technologies, while perhaps not having the time to know them intimately, knowing enough to work with teams of differing skills and experiences to create larger works.
Is this a valid model for the maximum ‘bang’ for the mid/senior New Media artist? That remains to be seen; this is assuredly only one model. However, as technology advances and the field broadens, some of these issues need to be considered, and I hope that these suggestions are apt grist for the mill.
Counterculture and the Tech Revolution
GH's remarks made me want to share this essay by old-schooler RU Sirius
JN
By RU Sirius
Back in the day, when people were still asking me to explain "Mondo
2000," I used to tell them that we were doing this psychedelic
counterculture magazine called "High Frontiers" in the mid-1980s and
we were shocked — just shocked — when we were befriended by the
Silicon Valley elite. Suddenly, we found ourselves at parties where
some of the major software and hardware designers of those early days
were hanging out with NASA scientists, quantum physicists, hippies
and lefty radicals, artists, libertarians, and your general motley
assortment of smart types.
I was being a bit disingenuous when I made these comments. "High
Frontiers" already had a tech/science bias, largely because we'd been
influenced by the "Leary-Wilson paradigm." So we were technologically
progressive tripsters. I'd also followed Stewart Brand's work with
interest through the years.
The connection between the creators of the driving engine of the
contemporary global economy and the countercultural attitudes that
were popular among young people during the 1960s and 70s was sort of
a given within the cultural milieu we ("High Frontiers/Mondo 2000")
found ourselves immersed in as the 1980s spilled into the 90s.
Everybody was "experienced." Everybody was suspicious of state and
corporate authority – even those who owned corporations. People
casually recalled hanging out with Leary, or The Grateful Dead, or
Ken Kesey, or Abbie Hoffman. You get the picture.
But these upcoming designers of the future were not prone towards
lots of public hand waving about their "sex, drugs and question
authority" roots. After all, most of them were seeking venture
capital and they were selling their toys and tools to ordinary Reagan-
Bush era consumers. There was little or no percentage in trying to
tell the public, "Oh, by the way. All this stuff? This is how the
counterculture now plans to change the world."
And while there has been plenty of implicit – and even some explicit
– talk throughout the years about these associations, no one really
tried to trace the connections until 2005, when John Markoff
published What the Dormouse Said: How the 60s Counterculture Shaped
the Personal Computer.
Markoff's narrative revolved largely around the figures of Douglas
Engelbart and Stewart Brand. His book, according to my May 2005
conversation with him on the NeoFiles podcast, covered "the
intersection or convergence of two cultures around the Stanford
campus in Palo Alto, California throughout the 1960s. One was a
psychedelic counterculture and the other was the anti-war movement;
and then you have the beginnings of computer technology intersecting
them both."
Engelbart, in contrast to the mainstream in computer science back
then, started thinking about computers as something that could
augment and expand the capacity of the human mind. At the same time,
another Palo Alto group was researching LSD as a tool for augmenting
and expanding the capacity of the human mind. And then, along came
the whole anti-war, anti-establishment movement of the sixties and
all these tendencies become increasingly tangled as a "people's"
computing culture evolves in and around the San Francisco Bay Area.
What the Dormouse Said is a marvelous read that gives names and faces
to an interesting dynamic that helped give birth to the PC. The story
is mostly localized in Palo Alto in Silicon Valley, and it’s largely
about how connections were made. In this sense, it's a story that is
as much based on proximity in physical space and time, as it is a
story about the evolution of the cultural ideas that might be
associated with that word: "counterculture."
Fred Turner's From Counterculture to Cyberculture: Stewart Brand, the
Whole Earth Network, and the Rise of Digital Utopianism digs more
deeply into how the seed of a certain view of how the world works
(cybernetics) was planted into the emerging 60s counterculture
largely through the person of Stewart Brand, and how that seed has
succeeded – and how it has continued to exfoliate in new and
unexpected ways. While Markoff's book blew the cultural lid off of a
partly-suppressed truth — that computer culture was deeply rooted in
psychedelic counterculture — Turner's book takes a broader sweep and
raises difficult questions about the ideological assumptions that
undergird our counterculturally-inflected technoculture. They’re both
wonderful reads, but Turner's book is both more difficult and
ultimately more rewarding.
What Turner does in From Counterculture to Cyberculture is trace an
arc that starts with the very mainstream American interest in
cybernetics (particularly within the military) and shows how that
implicit interest in self-regulating systems leads directly into the
hippie Bible, the "Whole Earth Catalog" and eventually brings forth a
digital culture that distributes computing power to (many of) the
people, and which takes on a sort-of mystical significance as an
informational "global brain." And then, towards the book's
conclusion, he raises some unpleasant memories, as Brand’s digital
countercultural elite engages in quasi-meaningful socio-political
intercourse with Newt Gingrich’s Progress and Freedom Foundation and
other elements of the mid-90s "Republican Revolution."
While I welcome Turner’s critical vision, I must say honestly that,
although I was repulsed by the Gingrich alliance and by much of the
corporate rhetoric that emerged, at least in part, out of Brand's
digital elitist clan — I think Brand’s tactics were essentially
correct. Turner implies that valuable social change is more likely to
happen through political activism than through the invention and
distribution of tools and through the whole systems approach that is
implicit in that activity. But I think that the internet has —
palpably — been much more successful in changing lives than 40 years
of left oppositional activism has been.
For one example out of thousands, the only reason the means of
communication that shapes our cultural and political zeitgeist isn't
COMPLETELY locked down by powerful media corporations is the work
that these politically ambiguous freaks have accomplished over the
past 40 years.
In other words, oppositional activism would be even more occult —
more hidden from view – today if not for networks built by hippie
types who were not averse to working with DARPA and with big
corporations. The world is a complex place.
In some ways, Turner's critique of cyber-counterculture is similar to
Thomas Frank's criticism of urban hipster counterculture in his
influential book, The Conquest of Cool: Business Culture,
Counterculture, and the Rise of Hip Consumerism. It, in essence,
portrays hipsterism as a phenomenon easily transformed into a
titillating, attractive, libertine whore for big business.
Frank argues that American businesses felt stultified by the
conformism of the American 50s and needed a more expansive,
experimental, individualistic consumer base that would be motivated
by the frequent changes in what’s hip and who would desire a wider
variety of products. So the hippie culture, despite the implied
critique of consumerism it inherited from the beats, actually
energized consumer capitalism and, through advertising and mainstream media, the business world amplified the rebellious message of sixties youth counterculture, encouraging consumers to "join the Dodge rebellion" and "live for today."
These books by Frank and Turner raise interesting questions and
challenge most folks' usual assumptions about the counterculture. But
one of the interesting questions that might be raised in response to
these critiques is, "So what?"
In my own book on counterculture as a sort of perennial historical
phenomenon, Counterculture Through the Ages: From Abraham to Acid
House (written with Dan Joy), I identify counterculturalism with the
continual emergence of individuals and groups who transgress some of
the taboos of a particular tribe or religion or era in a way that
pushes back boundaries around thoughts and behaviors in ways that
lead to greater creativity, greater enjoyment of life, freedom of
thought, spiritual heterodoxy, sexual liberties, and so forth. In
this context, one might ask if counterculture should necessarily be
judged by whether it effectively opposes capitalism or capitalism's
excesses. Perhaps, but complex arguments can be made either way, or
more to the point, NEITHER way, since any countercultural resistance
is unlikely to follow a straight line – it is unlikely to reliably
line up on one side or another.
These reflections may not be directly related to one of Turner's
concerns: that an elite group of white guys have decided how to
change the world. On the other hand, one might also ask how much
direct influence the last decade's digerati still has. The "ruling
class" in the digital era is an ever-shifting target; all those kids
using Google, YouTube, the social networks, etc., don't know John
Brockman from John Barlow, but a good handful of them certainly know
Ze Frank from Amanda Congdon.
Meanwhile, the corporate digital powers seem to be pleased to have an
ally in the new Democratic Speaker of the House. And that may be the
coolest thing about the world that Stewart Brand and his cohorts have
helped to inspire. In the 21st Century, the more things change, the
more things change.