Source Database:
Source Entry URL:
Source Entry OAI-PMH Identifier:
Author(s) of the Source Entry: Sumeya Hassan
Source Entry Language(s):

Re:Cycle III is an extension of my previous generative video art piece Re:Cycle (exhibited at ELO 2012). The current version is part of an ongoing exploration into the combined poetics of image, sequence, motion, computation, and meaning. The Re:Cycle system comprises a database of video clips, a second database of video transitions, and a computational engine that selects and presents the clips in an unending stream. The selection process is driven by a set of metadata tags describing the content of each video clip. The system can incorporate video clips of any content or visual form; it is currently based on nature scenery: mountains, rivers, ice, snow, waterfalls, trees. (Future versions will incorporate urban and human imagery.)

The original version was completely committed to the aesthetic of ambient experience. Like Brian Eno's "ambient music", it was not intended to capture or hold your attention, yet it was required to give visual pleasure whenever you did choose to gaze at it. As the system evolves, this commitment to ambience is gradually giving way to a more engaged and prolonged experience, a change driven by the incorporation of increased semantic and visual coherence. The original version relied entirely on random shot selection and sequencing; an early modification introduced a low level of semantic coherence based on simple metadata tags. The current version takes this commitment to semantic coherence further. First, the shots are becoming more varied and the tagging system more complex. This increase in the variety of the textual metadata tags is amplified by the application of more sophisticated algorithmic sequencing processes. The old system could present a series of short sequences made up of clips with shared visual content (e.g. "trees" or "waterfalls"). The new system will retain that short-term sequencing logic, but will nest it within a set of larger segments.
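The tag-driven selection described above — random choice constrained by shared content tags — might be sketched as follows. The clip records, tag names, and sequence lengths here are illustrative assumptions, not the actual Re:Cycle implementation.

```python
import random

# Hypothetical clip metadata: each clip carries content tags (illustrative only).
CLIPS = [
    {"id": "c01", "tags": {"trees", "snow"}},
    {"id": "c02", "tags": {"trees"}},
    {"id": "c03", "tags": {"waterfalls", "rivers"}},
    {"id": "c04", "tags": {"waterfalls"}},
    {"id": "c05", "tags": {"mountains", "ice"}},
    {"id": "c06", "tags": {"rivers", "ice"}},
]

def short_sequence(tag, length=3):
    """A short run of clips sharing one content tag (the old sequencing logic)."""
    pool = [c["id"] for c in CLIPS if tag in c["tags"]]
    if not pool:
        return []
    return [random.choice(pool) for _ in range(length)]

def stream(n_sequences=4):
    """Approximate the unending stream as a series of tag-coherent runs."""
    all_tags = sorted({t for c in CLIPS for t in c["tags"]})
    out = []
    for _ in range(n_sequences):
        out.extend(short_sequence(random.choice(all_tags)))
    return out
```

In a real engine the stream would loop indefinitely and interleave clips from the transitions database; here the loop is bounded only so the output can be inspected.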
The larger segments will be based on more sophisticated concepts of progression, arc, time and closure. The system is based on text at its most fundamental level: the decision making relies on the tags, textual descriptors of video clip content, and the system reads, selects and sequences using these tags. The driver is text; the experience is visual. At a higher level, the work is evolving towards a more complicated sequencing logic that will combine a heightened sense of flow and progression with an increased commitment to meaning. One can see it as a visual poetry machine, one that has advanced from doggerel to a more expressive semantic and visual output. (Source: Author's Abstract)
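The nesting of short, tag-coherent runs inside a larger arc-shaped segment could be sketched like this; the arc stages, clip records, and run length are purely illustrative assumptions, not the author's actual design.

```python
# Illustrative clip metadata (assumed, not the actual Re:Cycle database).
CLIPS = [
    {"id": "m1", "tags": {"mountains"}},
    {"id": "r1", "tags": {"rivers"}},
    {"id": "r2", "tags": {"rivers", "ice"}},
    {"id": "w1", "tags": {"waterfalls"}},
    {"id": "s1", "tags": {"snow", "ice"}},
]

def arc_segment(clips, arc, run_length=2):
    """One larger segment: an ordered series of short tag-coherent runs.

    The ordering of `arc` supplies the segment's sense of progression and
    closure; each inner run keeps the old shared-content sequencing logic.
    """
    segment = []
    for tag in arc:  # ordered stages: opening -> development -> closure
        run = [c["id"] for c in clips if tag in c["tags"]][:run_length]
        segment.append((tag, run))
    return segment
```

For example, `arc_segment(CLIPS, ["mountains", "rivers", "waterfalls", "snow"])` walks the tag themes in a fixed order, so the text of the tags, not chance alone, shapes the visual arc.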