Stefanos & Diaa
Since emergence is a concept with several (sometimes vague) definitions, it is useful to (a) review some of the definitions relevant to the context of our work, and (b) converge on an operational definition that we will use.
We have been reading a number of informative references on emergence in the fields of interactive and generative arts. All these references define emergence as the situation in which a new “macroscopic” form appears spontaneously from the arrangement of many “microscopic” parts that interact with one another via simple rules, in a way that cannot be straightforwardly anticipated by inspecting the rules and parts themselves. There is a lot to unpack in such a definition, and we will try to clarify some aspects of it here.
In generative art, emergence is most frequently related to artificial life processes used to generate the artwork. It consists of setting a number of (re)production or (re)arrangement rules among a set of interacting artificial entities in motion and simulating the evolution of the system for many generations in a computer, until an unexpected pattern emerges. Examples of such processes from the physical world include termite mounds or microbial colonies (you can also see some examples of this in Montse and Jo’s blog). This type of emergence is often classified as physical emergence and its computational counterpart is called computational emergence. Note that the definitions of physical and computational emergence make no reference to an audience: they do not rely on the presence of an observer.
A contrasting definition is that of perceptual emergence: instances of emergence that rely on the perception of an observer. This is exemplified in the above picture, in which a shape emerges only upon observation. This is particularly relevant to interactive art, where there is a sense of “openness” in artworks, i.e., the participation of the audience is required for the artwork to be completed.
In this project, we will implement and study links between computational and perceptual emergence in an interactive and generative art project. The autonomous agents will be the participants, whose actions will determine the discretized agent states that will be used as inputs. These individual input states will interact in a virtual generative system that is sufficiently complex to exhibit computational emergence, such as, e.g., artificial neural networks. The output of the system will be presented as feedback to the participants, making the establishment of computational emergence manifest. The collective perception of the emergent pattern in the output will in turn accomplish the interactive perceptual emergence.
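The feedback loop sketched above can be illustrated in code. The following is a purely hypothetical toy: the binary discretization of agent states, the random thresholded network standing in for the generative system, and the way participants copy the output back into their states are all illustrative assumptions, not the actual design.

```python
import random

random.seed(0)

def generative_system(states, weights):
    """Stand-in for the virtual generative system: each output unit
    sums weighted participant states and thresholds the result."""
    outputs = []
    for row in weights:
        total = sum(w * s for w, s in zip(row, states))
        outputs.append(1 if total > 0 else 0)
    return outputs

n_participants, n_outputs = 8, 4
weights = [[random.uniform(-1, 1) for _ in range(n_participants)]
           for _ in range(n_outputs)]

# Initial discretized participant states (e.g., derived from their actions).
states = [random.choice([0, 1]) for _ in range(n_participants)]

# A few feedback iterations: the output is "presented" back to the
# participants, who here simply adopt a broadcast bit as their new state.
for step in range(5):
    out = generative_system(states, weights)
    states = [out[i % n_outputs] for i in range(n_participants)]
    print(step, out)
```

In the real piece, of course, the participants' responses to the feedback would be far richer than this mechanical copy-back.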
We have put together a 5-month action plan to develop and implement the project:
Diaa & Stefanos
From this week onward, we have decided we will be blogging collaboratively instead of individually. This seems more appropriate currently, as more back and forth is going on, and we would like the blog post to serve as anchoring and checkpointing for our progress.
Despite some terseness on the blog the last few weeks, we have been busy developing the conceptual basis for the experiments we want to perform and document. Upon discussing with Julia, we have concluded that by the end of the residency we would like to have a full road map for the project. We will then move on to the fun tinkering stage! We will start fleshing out more practical ideas and discussing relevant literature from next week onward, but this week we wanted to converge on a first draft of an abstract of what our project will be about. It is this:
Our goal is to introduce, document, model, and exemplify an interactive art protocol that employs emergent collective behavior as its main methodology. By emergent, we mean that simple interactions between agents in the artwork lead to complex results that are impossible for an individual agent to achieve. One can say that emergence is achieved when a system of many agents globally exhibits behavior that goes beyond the simple sum of the actions of the agents. The agents can either be participants, or their avatars in some virtual environment. We call this protocol designed emergence.
Interactive art created using the designed emergence methodology generates a collective experience or “memory” that is impossible to achieve on the level of the individual. Even though this collective experience is intended to appear when agents self-organize spontaneously, it can be either “planted”, i.e., fully or mostly predetermined by the artist (the Mexican wave can be thought of as a simple illustrative example of “planted” emergence), or generative, i.e., not predetermined by the artist.
Collective behavior typically arises when large numbers (think hundreds, thousands, or millions) of constituents of a system interact with one another. It is hence important to devise a modeling framework for designed emergence, so that practitioners can prototype and benchmark their ideas without the need to perform large-scale experiments. We will therefore implement a modeling scheme, where participants are replaced by simple artificial intelligence units. This will allow for computer simulations of designed emergence, with various degrees of realism, to be performed.
In this introductory statement, several specialist terms are used with particular meanings that may differ, in this scope, from their original definitions:
Collective behavior refers to the spontaneous and unstructured behavior of a group of people in response to the same event, situation, or problem.
Within the context of the above introduction, collective behavior refers to the actions of viewers immersed in the artificial situation generated by the autonomous system of the interactive artwork.
Virtual environment means the artificial environment built digitally using virtual reality techniques.
Within the context of the above introduction, the virtual environment is designed to produce autonomous outputs, with which viewers can interact by feeding back their action as a new input to be processed to generate the next set of outputs, and so on.
Collective experience is a common experience that people may witness as a result of interaction with a particular situation.
Within the context of the above introduction, collective experience refers to the common memories gained from a shared course of events; however, it may be interpreted positively or negatively according to each viewer’s related existing memories. Within this approach, the role of existing memories in knowledge generation could be investigated.
An autonomous system is a mandatory component of a generative environment: the participants cannot predict a particular output of the system, and likewise the creator cannot rig the interactive setup to produce predetermined outputs.
Within the context of the above introduction, the autonomous system is the artist-made system that generates the virtual environment.
If A, B, C, and D are four pieces of information stored in a human memory “M”:
When A, B, C, and D pass through “M” (i.e., are stored and then recalled), they become Am, Bm, Cm, and Dm.

IR: internal revising process (revising information by passing it through the existing related information in our memory).
ER: external revising process (revising information by passing it through the newly acquired related information in our memory).
IT: internal time (the time elapsed since the new information was stored in our memory, i.e., the internal revising time).
ET: external time (the time elapsed since the related acquired information was gained).
RP: recalling process (the procedures responsible for evoking the stored information under particular circumstances).

A ≠ Am
B ≠ Bm
C ≠ Cm
D ≠ Dm

Am = A + IR + ER (applied through IT & ET)
Bm = B + IR + ER (applied through IT & ET)
Cm = C + IR + ER (applied through IT & ET)
Dm = D + IR + ER (applied through IT & ET)
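To make this model concrete, here is a hypothetical numerical toy. The model itself does not specify functional forms, so the choice of IR and ER as linear in IT and ET, and the rates used, are pure assumptions for illustration.

```python
# Hypothetical toy encoding of the memory model: a stored item A becomes
# Am = A + IR + ER, where the revision terms grow with the internal and
# external times IT and ET. All functional forms and rates below are
# illustrative assumptions, not part of the original model.

def recall(a, it, et, ir_rate=0.1, er_rate=0.2):
    """Return the recalled version Am of a stored value A after
    internal time `it` and external time `et`."""
    ir = ir_rate * it   # internal revising accumulated over IT
    er = er_rate * et   # external revising accumulated over ET
    return a + ir + er

A = 1.0
Am = recall(A, it=3, et=2)
print(Am)              # the recalled item differs from the original: Am != A
```

With any nonzero IT or ET, the recalled value differs from the stored one, which is exactly the A ≠ Am property the model asserts.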
The information is processed neurally in our brain, where the axon carries impulses away from the cell body to the axon terminals.
However, the difference between the everyday neural processing of information and the information held in memory lies in time: the information remains in our brain, but not in its original state. We often reject pieces of information and arguments in our daily lives, but after a while some of them may come to be accepted. Such processes, in which a particular argument gains a level of justification, are executed (consciously or unconsciously) during the internal and external revising processes inside our memory, and they are vitally affected by the time spent in these processes. Within this context, the role of memory in generating knowledge is vital!
Artificial neural networks (ANNs) can simulate normal information processing by modeling the biological neuron mathematically.
At its most basic level, the mathematical model of the biological neuron can be implemented algorithmically through the concept of the perceptron, like this:
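A minimal sketch of such a perceptron, assuming a simple step activation (the weights below are arbitrary illustrative values):

```python
def perceptron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs followed by a step activation:
    the unit 'fires' (returns 1) if the sum exceeds zero."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

print(perceptron([1, 0], [0.6, -0.4]))  # -> 1 (weighted sum 0.6 > 0)
print(perceptron([0, 1], [0.6, -0.4]))  # -> 0 (weighted sum -0.4 <= 0)
```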
One of the key elements of a neural network is its ability to learn. A neural network is not just a complex system, but a complex adaptive system, meaning it can change its internal structure based on the information flowing through it. Typically, this is achieved through the adjusting of weights. In the diagram below (fig. 3), each line represents a connection between two neurons and indicates the pathway for the flow of information. Each connection has a weight, a number that controls the signal between the two neurons. If the network generates a “good” output (which we’ll define later), there is no need to adjust the weights. However, if the network generates a “poor” output (an error, so to speak), then the system adapts, altering the weights in order to improve subsequent results.
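The weight-adjustment loop just described can be sketched with the classic perceptron learning rule; the learning rate, epoch count, and the OR-gate task below are illustrative choices, not anything prescribed by the text.

```python
def step(x):
    # Step activation: fire (1) only if the weighted sum is positive.
    return 1 if x > 0 else 0

def train(samples, n_inputs, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge each weight in proportion to the
    error between the desired and actual output."""
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = step(bias + sum(w * x for w, x in zip(weights, inputs)))
            error = target - output          # "poor" output => nonzero error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error               # no change when output is "good"
    return weights, bias

# Learn a logical OR gate from examples.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(samples, n_inputs=2)
for inputs, target in samples:
    assert step(bias + sum(w * x for w, x in zip(weights, inputs))) == target
```

When the output is already correct the error is zero and nothing changes; only “poor” outputs trigger an adaptation, exactly as described above.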
Within this context there are two keywords to be considered:
1- Weight:
Techopedia explains weight as follows: In an artificial neuron, a collection of weighted inputs is the vehicle through which the neuron engages in an activation function and produces a decision (either firing or not firing). Typical artificial neural networks have various layers including an input layer, hidden layers and an output layer. At each layer, the individual neuron is taking in these inputs and weighting them accordingly. This simulates the biological activity of individual neurons, sending signals with a given synaptic weight from the axon of a neuron to the dendrites of another neuron.
2- Activation function:
An activation function is a function used to transform the activation level of a unit (neuron) into an output signal. Typically, an activation function has a “squashing” effect.
Coming back to our first assumption: by adding a time coefficient to the weights and activation functions of the neural units, we could investigate the positive or negative effect of time on the justification rate achieved through the memorization mechanism, which would reflect the role of our memories in generating or revising knowledge.
If we want to define memory, can we think of it as a bio/neural repository?! If so, how can we define the capacity of that repository? Which kinds of inputs are saved in it, and by which methods?
The adjective bio/neural refers to both material and function. While our brain is made of biomaterial like the rest of our body, it performs countless neural tasks every second to manage, create, and interpret all signals (information) in order to orchestrate all the visible and invisible functions of the human body.
One of the most mysterious things about memory is that it has no particular place in our brain. It consists of processes and functions facilitated by certain mechanisms, in which information is treated and stored in order to constitute a wide range of our cognitive awareness!
Through our five sensory channels, unlimited kinds of information are submitted to and stored in our memory, which is able to recall them whenever we need; in this way we build our own knowledge bank throughout our lives.
But the point is: within these neural processes, what is the nature of the recalling process by which stored information comes back to our attention in order to be used? Does memory simply resend the stored, encoded information (in the form of neural signals) to our attention, to be re-decoded and used as it was in its original state? Or is the stored information re-treated and re-interpreted through the encoding, decoding, and recalling processes?
If the latter, then our memory is not only for saving information; it plays a serious role in generating knowledge, based on the quantitative and qualitative capacity of the information it stores regarding a particular matter!
It is critical that neural networks remember some information to do more meaningful tasks:
“It is desirable that intelligent systems have an external memory attached to it. Research has shown that we could have a model of working memory (also known as short term memory) that assists neural networks. The brain has a working memory which can be used to fetch and write data. The computer traditionally have cache which is temporary memory to quickly access information that can be called working memory.”
(Abhijeet Katte, Mar 22, 2018)
Currently, I am going to build an ANN-based computational model of memory in order to investigate a simple example of how information could be processed (re-treated and re-interpreted) during the recalling process.
I look forward, with Stefanos’ help, to developing my model into a comprehensive system that can define the nonlinear generative memory numerically and quantitatively.
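As a first sketch of what an ANN-based memory model could look like, here is a tiny Hopfield-style associative memory. This is one standard textbook choice for modeling storage and recall, offered purely as an assumption; it is not necessarily the model the project will build. Note that the recalled item is actively re-constructed from the network's weights rather than simply read back, echoing the re-treatment idea above.

```python
def train_hopfield(patterns):
    """Store +/-1 patterns with the Hebbian outer-product rule."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                   # no self-connections
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, cue, steps=5):
    """Iteratively re-construct a stored pattern from a noisy cue."""
    state = list(cue)
    for _ in range(steps):                   # synchronous threshold updates
        state = [1 if sum(W[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1
                 for i in range(len(state))]
    return state

stored = [1, 1, 1, -1, -1, -1]
W = train_hopfield([stored])
noisy = [1, -1, 1, -1, -1, -1]               # cue with one flipped bit
print(recall(W, noisy))                       # recovers the stored pattern
```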
About two weeks ago, I discussed with my residency mate Stefanos our common interests, with a view to building an interdisciplinary artistic project that adopts research-based artistic practice as a strategy for thinking and creating.
In fact, we have almost settled all the main outlines of our project. The discussion began with our brain’s neural functions and how they might be used within an interactive context. But this is just the beginning: while we discussed such potential technical aspects, we also recognized that the conceptual frame has to be defined first.
Stefanos and I would like to try to generate a collective experience, using neural signals from multiple sources.
As Stefanos said, our project ties into collective phenomena in complex systems: the reveal depends on the emergence of collective behavior among the participants’ neural activity. Having settled on this general idea, we are going to suggest, through our work, definitions of some terms in our general statement in order to refine the scope of our project.
Actually, I suggest focusing on memory as a mysterious collective phenomenon in our brain, and on how we can describe memory, as a preservative source or store, using the language of quantum mechanics (is there any way to reinterpret that store in the language of quantum mechanics?). Further, in order to refine our problem, I would like to suggest investigating the role of our collective generative memory in generating knowledge: is memory only a preservative source of knowledge, or can it generate new knowledge based on its content?
I do not intend to investigate this assumption physically, but I am sure that Stefanos will figure out an approachable way to address this investigation quantitatively. Then, after collecting the appropriate material, I am going to design a digital environment based on an artificial neural network (ANN) in order to simulate our investigation in the form of an interactive artwork, in which viewers can examine their memories neurally and see how they can build a non-linear collective generative memory as a source for forming knowledge visually.
I have begun to devote considerable time in my schedule to this work, as I am extremely interested in creating such a critical investigation, in order to emphasize the core role of research-based artistic practice not only for art but also for several other scientific domains. In the next blog post, we are going to cover the basic principles of our project in order to frame its conceptual, technical, and experimental approaches.
In our last meeting, Diaa and I discussed the potential for further development of his neurofeedback setup. We discussed, in particular, ways to expand the contact established between viewers and objects, which itself becomes the artwork, along the lines outlined by Diaa in his posting from last week.
One exciting idea is to try to generate a collective experience, using neural signals from multiple sources. In this case, the contact will constitute a “multiplayer” art piece. It is particularly intriguing to tie the methodology with collective phenomena in complex systems: the reveal depends on the emergence of collective behavior among the participants’ neural activity.
Since the emergence of collective phenomena requires sufficiently large numbers of degrees of freedom (i.e., participants), one would need to employ modeling in order to ensure that the design of the experience is functional. Diaa and I discussed the possibility of algorithmic modeling to achieve this, so I have been thinking about ways to do that. Also, the simulation of this process could potentially become an artwork in and of itself.
One thing that comes to mind is based on the currently very popular tool of artificial neural networks. These are mathematical models that are employed in artificial intelligence to mimic, to some extent, neuronal activity and “learning”. A recent trend in this field has been to couple neural networks in either adversarial or cooperative scenarios. It is interesting to think in the next few weeks whether multiple simple neural networks can be coupled and whether such a system can be used to simulate the experimental setup sketched above.
In a previous entry, I mentioned some studies that aim to classify paintings based on quantitative measures. I wanted to follow up on that in some detail, because, even though this is an exciting nascent field of scientific inquiry, one must be mindful of the scope of each study to avoid misinterpreting the results. This is dually important in this context, as there can be misunderstandings on both the artistic and scientific sides. I outline a simple example below.
One of the studies I mentioned was the recent AI-assisted modern art classification of Sigaki et al. The authors use two information-theoretic measures - entropy (in its information-theoretic sense) and complexity - that they calculate for each painting. They also use a machine learning algorithm to automatically assign predetermined labels, corresponding to artistic movements, to each painting in a large database. The authors then show that paintings belonging to a certain movement tend to aggregate in a certain region of the entropy-complexity plane. Interestingly, they were able to infer pivotal points in history that led to important stylistic shifts.
This classification scheme applies to large collections of paintings, and not to individual paintings themselves. Consider, for example, the data points for the periods 1031-1570 and 1939-1952. These have almost the same entropy and very similar complexity measures, even though the artworks produced in these two periods can look markedly different - see example below. It becomes clear that individual paintings may evade or trick this method.
One can ask whether this classification protocol can be improved to detect differences in style on a finer level of individual paintings or small collections, such as periods of a painter’s life. I am intrigued by this question and have been thinking about ways to answer it affirmatively. One could, for example, try to assign to each painting an entire spectrum of values of a characterizing quantity, instead of just two (or a handful of) numbers. I think there is a straightforward way to do this. To outline what I have in mind, I first have to explain the tools used in the aforementioned work in a bit more detail.
The definitions of complexity and entropy in the diagram above involve the calculation of a distribution of probabilities of “minimal structures” appearing in the digital copies of the paintings studied. The minimal structures used were all the 2x2-pixel squares in each painting. Imagine, for example, that paintings are restricted to consist of only black and white pixels. Then, each pixel can take one of two values, black or white. The number of 2x2 minimal structures is then 2x2x2x2=16. The number of occurrences of each minimal structure is calculated by scanning the image, yielding a probability distribution. This distribution is then summed in two different ways, yielding the two numbers relevant to the analysis, i.e., entropy and complexity. Of course the analysis allows for more than just black and white pixels – the authors in fact use grayscale with 256 levels of intensity.
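The counting procedure just described can be sketched for the simplified black-and-white case (0 = black, 1 = white). The checkerboard test image below is my own illustrative choice, not data from the study.

```python
from collections import Counter
from math import log2

def pattern_distribution(image):
    """Count every overlapping 2x2 block (one of 16 possible patterns
    for a binary image) and return the resulting probability distribution."""
    rows, cols = len(image), len(image[0])
    counts = Counter()
    for r in range(rows - 1):
        for c in range(cols - 1):
            block = (image[r][c], image[r][c + 1],
                     image[r + 1][c], image[r + 1][c + 1])
            counts[block] += 1
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}

def shannon_entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# A 6x6 checkerboard: highly ordered, so very few distinct 2x2 patterns.
checkerboard = [[(r + c) % 2 for c in range(6)] for r in range(6)]
dist = pattern_distribution(checkerboard)
print(len(dist))                 # only 2 of the 16 patterns ever appear
print(shannon_entropy(dist))     # low entropy for such an ordered image
```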
What I claim is that, instead of the summed values of complexity and entropy, one could use directly the distribution of probabilities of minimal structures to identify paintings. One can then use the notion of distance between probability distributions to define a finer distinction between styles. I hope to get the chance to try this out soon. Funnily, my collaborators and I used a similar notion of distance in recent work that characterizes the time evolution in certain quantum systems. Talk about science providing inspiration!
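One concrete choice for such a distance between pattern distributions, assumed here purely for illustration, is the Jensen-Shannon distance: it is symmetric, bounded, and well defined even when some patterns appear in only one of the two paintings being compared.

```python
from math import log2, sqrt

def js_distance(p, q):
    """Jensen-Shannon distance between two probability distributions
    given as {pattern: probability} dictionaries."""
    keys = set(p) | set(q)

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms.
        return sum(a.get(k, 0.0) * log2(a.get(k, 0.0) / b[k])
                   for k in keys if a.get(k, 0.0) > 0)

    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    return sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

p = {"A": 0.5, "B": 0.5}
q = {"A": 0.5, "B": 0.5}
r = {"A": 1.0}
print(js_distance(p, q))   # 0.0 for identical distributions
print(js_distance(p, r))   # positive for different ones
```

Two paintings (or two small collections) would then be "close in style" when the Jensen-Shannon distance between their 2x2-pattern distributions is small.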
I would like to expand on Stefanos’ explanation of my interactive neural holographic puzzle.
In fact, this neural artwork has its conceptual roots in Marshall McLuhan’s writings, in which he described media as extensions of our body. However, these kinds of neural interactive setups have exceeded what McLuhan imagined when he spoke of our body as a whole, capable of performing a wide range of physical activities that could be expanded through media.
But now that the human body can be fragmented into its constituent systems, and it has become possible to use our neural or biological system alone and integrate it into an artificial system, the extension of our body has become stronger and more effective than its source (the human body itself), to the extent that the human body is no longer needed as a unified physical form in the artwork (as an artist or as a participant), as long as a part of it (the neural or biological system) can perform its functions, more effectively, in the artistic environment.
While the human body was the main source of actions in the early days of interactive techniques, the neural or biological systems of our body can govern the new generation of interactive techniques through neural or biological actions. In this case, the consequences of the action (the neural or biological responses of the human body) are more magnified and expanded than the action itself within the interactive field in its physical environment.
In this way, the human body is no longer a physical part of the artwork itself; rather, its basic systems (neural or biological) are extended as part of the artwork. This can represent a form of post-body action in interactive art, in which the artwork’s system builds a direct, mutual contact with our body’s systems, excluding physical activities, in order to engage the core systems of our bodies’ functions.
Similarly, in the neural-holographic system described above, the holographic puzzle must be considered as a set of processes governed by the participant’s neural responses. In this way, the holographic processes are in direct connection with our brain. This contact, in fact, becomes the essence of the artwork, in which all information derived from the entire artistic process is decoded neurologically, helping the participants reshape their aesthetic experience and arrive at an operational definition of their interactive actions in neurological terms.
Although New Media art, as a separate artistic discipline, has achieved several paradigm shifts in contemporary visual arts, the most important of them raises a challenging question about the nature of the aesthetic pleasure of artworks, not only in contemporary art but also in traditional art.
In fact, aesthetic pleasure exists in our feelings, not in the artwork itself. It is produced by a chemical reaction between the artwork (as a stimulator) and the viewer’s brain (as a receiver), which affects our feelings and makes us feel pleasure. In media art, any artwork is a visual equation that can stimulate a kind of aesthetic pleasure inside the viewers. Of course, every artist, traditional or contemporary, has their own way of writing their visual equations. While traditional artists prefer to write their visual equations in a complete visual/pictorial form, interactive and media artists prefer to invite the viewers to participate in writing them.
Since the beginning of process art in the mid-1960s, several artists have been asking where the core of the artwork lies: is it in the final product, or in the processes carried out to achieve it in the physical world?
In fact, in contemporary interactive media arts, there is no physical existence of the artwork itself. What we have done is remove the external visual stimulator (the traditional artwork) and instead design systems that deal directly with the main sources of aesthetic pleasure, seeking to show that the artwork is inside us: the artwork is what we feel, the artwork is what we think.
In another very interesting work, Diaa investigates the properties of holography in relation to concept art. A basic tenet of concept art is that the idea behind the artwork is the artwork. An early simplified illustration of this principle in visual arts is One and Three Chairs by Joseph Kosuth, in which the real form, the visual form, and the verbal form of an object are presented side-by-side. This presentation demonstrates how the idea can be disconnected from the object in a visual art piece, because it becomes clear that replacing the original real and visual forms does not affect the artwork.
Using a carefully designed experimental setup and a formal analysis of his experimental results, Diaa demonstrates that a holographic recording “stores” a multifaceted representation of an object. He then argues that holography, due to its nature, necessarily transcends Kosuth’s model, and must hence be considered a post-conceptual method. To prove this, Diaa elucidates how holograms simultaneously (a) recreate a perceived “reality” that is sufficient to represent the real form of an object, (b) function as a pictorial representation of the object, and (c) encode the object into a “verbal” (numerical, to be precise) representation, in the form of a recorded interference pattern.
What I find the most fascinating about Diaa’s work is that it required scientific concepts and methodology to formalize theory-of-art statements. It is also interesting that, in both Diaa’s and Kosuth’s analyses, it is the verbal (or numerical) form that emerges as the most fundamental or irreplaceable. I am intrigued to learn more about these two aspects in the coming weeks.
This week I have been familiarizing myself with some of Diaa’s work. In a really cool paper, Diaa describes an interactive piece that allows a participant to interact with a manifestation of their own brain activity, in order to solve a puzzle. The experimental setup translates recorded electroencephalograph data to a combination of LED light frequencies in real time. The LEDs controlled in this manner illuminate holographic plates that only reveal their content when the frequency of the light matches the frequency used to record the hologram. By observation, participants realize the role of their brain activity and control it to reveal the holographic image, thus solving the puzzle.
This is a brilliant use of neurofeedback: Diaa’s procedure ensures that an artwork is viewed in full only if the participant is in the right state of mind. In a more general and abstract sense, it is a nice example where interaction with a representation of one’s own mental state alters the “reality” experienced.
Attempting to interpret Diaa’s work in physics terms, one could say that there is a physical system (the participant) comprised of a single degree of freedom (the participant’s mental state) that is quantized, i.e., allowed to take three values (corresponding to alpha, beta, and gamma waves) in a one-dimensional state space (the space of brain wave frequencies). Since the artwork aims to reveal a holographic image, there is an implicit bias towards a particular ground/target state (the gamma wave state). Through the activity, the physical system is expected to relax to its ground state stochastically (via a trial-and-error process).
In this language, Diaa’s puzzle becomes a stochastic optimization problem. Such problems are ubiquitous in science and range from trivial to extremely complex. One common complicating factor is the presence of many degrees of freedom. For example, in Diaa’s setup, this could be a second monitoring process of, say, the participant’s pulse. The participant would then have to not only be in the right mental state to reveal the hologram, but also in the right physical state. More interesting are cases where there is more than one participant in such a process. A simple example is Mindball, a game where two opponents aim to “out-chill” each other to win, as shown in the video below.
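The stochastic-relaxation reading of the single-participant puzzle can be sketched as a toy simulation. The uniform random resampling of the three states is an assumption made purely for illustration; a real participant's trial-and-error dynamics would of course be far richer.

```python
import random

random.seed(1)

STATES = ["alpha", "beta", "gamma"]   # the three allowed brain-wave states
TARGET = "gamma"                      # the implicit ground/target state

def relax(max_trials=1000):
    """Trial-and-error search: resample the state uniformly at random
    until it matches the target; return the number of trials needed."""
    for trial in range(1, max_trials + 1):
        if random.choice(STATES) == TARGET:
            return trial
    return max_trials

# Average over many runs: with success probability 1/3 per trial,
# the expected number of trials is 3.
trials = [relax() for _ in range(1000)]
print(sum(trials) / len(trials))
```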
It is intriguing to think about collective phenomena that can emerge when the number of degrees of freedom (e.g., the number of participants) is further increased. This leads to many questions. In the context of art, could one perhaps design collective experiences or “realities” by feeding a real-time representation of the collective mental state to the participants? On the scientific side, can one perhaps exploit such collective phenomena to solve problems, using some sort of “parallel” gamification? Could interaction with some representation of one’s thought process while solving a problem accelerate the solution? These are examples of things I am looking forward to discussing further with Diaa.
At our last Skype meeting, Stefanos and I raised controversial questions regarding the meaning of reality. So, this week I am going to draft some points that must be considered:
From a perspective of a contemporary Artist, Kristina Ask wrote:
“Art is tested at the boundary between reality and truth. From an aesthetic perspective, a falsehood can contain more truth than reality. For art, reality is an uninteresting phenomenon where concretizations and crisscrossing connections occur willy-nilly. Art can be said to exist in the sphere of truth, in defiance of reality.”
Within the context of her quote, Kristina Ask sheds light on a very radical question about the differences between “reality” and “truth” (as two sides of a knowledge band, so to speak) and how art ranges between these two sides.
Although Kristina raised this statement in a somewhat general sense, I would like to reinterpret her quote within the context of my own specialization (the sciences of visual arts and New Media art).
The hidden statement in this quote may damage the credibility of the old assumption that “the first purpose of art is to reproduce nature and life,” under which very old and traditional artistic trends aimed to mimic visual reality in a pictorial sense.
On the other hand, it lends credibility to the assumption that “the first purpose of New Media art and experimental art is to investigate, examine, and reveal the truth of all aspects of our nature and life.”
This is a core shift from mimicking reality to investigating the truth of that reality: while the first refers to the field/sphere/realm one visits in order to check an empirical statement, the second is an attribute of a statement, namely the information that this statement can be, or has been, verified.
Within this context, interactive, generative, and experimental artworks shift their focus from the visual entity of the artwork to the very processes used to produce it.
Once this shift was adopted, artistic practice-based research became available, in turn, for use within scientific investigations.
At the Royal Academy of Arts in London, the exhibition entitled “From pencil and paper to virtual reality: working from life today” shows how today’s artists are rethinking reality in innovative, unexpected ways, rebuilding artistic creation processes virtually.
I myself have been involved in several scientific inquiries aimed at building neural, physical, and biological setups that can examine several scientific concepts and the truth of their reality, rather than that reality alone. In this way - as my residency mate Stefanos wondered - I believe that one can embed the solution of some hard science problems into the processes built through artistic practice-based research.
In his blog entry from last week, Diaa touches upon the issue of directionality in the exchange between art and science. Apart from providing inspiration for artists, science often entails procedures that generate data that can be visualized (or sonified) artistically. These procedures are frequently used as tools for generating art. This has the potential of advancing both artistic methodology and science popularization (although, as Yana insightfully points out in her blog entry, it can also lead to inadvertent caricaturing).
Like Diaa, I am interested in science that has art as its input. Can the scientific study of art reveal hitherto unknown elements of art and its creation, and perhaps even elements of the nature of human creativity? I think this is an important question to ask. Below I discuss a few inquiries I found in the scientific literature that I think are relevant.
Perhaps one of the most famous examples of modern science applied to art was the discovery of fractals in paintings by Jackson Pollock. I find this case important, because it was of practical consequence: quantitative analysis using methods of physics was used to correctly distinguish between original Pollock pieces and fakes.
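As a toy illustration of the kind of analysis involved (a minimal sketch, not the specific procedure used in the Pollock studies), the box-counting dimension of a binary image can be estimated by counting, at several scales, how many boxes contain “paint,” and fitting the scaling exponent:

```python
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary image.

    For each box size s, count how many s-by-s boxes contain at least one
    occupied pixel, then fit log(count) against log(1/s); the slope
    estimates the dimension.
    """
    counts = []
    for s in box_sizes:
        # Trim so the image tiles evenly into s-by-s boxes.
        h, w = (image.shape[0] // s) * s, (image.shape[1] // s) * s
        trimmed = image[:h, :w]
        # Reshape into a grid of boxes and flag boxes containing paint.
        boxes = trimmed.reshape(h // s, s, w // s, s)
        occupied = boxes.any(axis=(1, 3))
        counts.append(occupied.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                          np.log(counts), 1)
    return slope

# Sanity check: a completely filled canvas should have dimension ~2.
square = np.ones((256, 256), dtype=bool)
print(round(box_counting_dimension(square), 2))  # → 2.0
```

A drip painting thresholded to black-and-white would instead yield a non-integer slope between 1 and 2, which is the quantity compared across genuine and fake works.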
Quantitative studies are often focused on large-scale categorization of art. Very recent work has used the notions of entropy (in its information-theoretic sense) and statistical complexity to categorize visual art and to extract trends in its evolution throughout history. Visual art also offers an important benchmark for computer vision. Methods from machine learning can be successful in style recognition and style transfer, and can even generate some pretty good abstract expressionism. Despite these successes, extracting meaningful context out of text or images is still a great open challenge in artificial intelligence.
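To make the information-theoretic notion concrete, here is a minimal sketch (my own simplification, not the pipeline of the studies cited) of the Shannon entropy of a grayscale pixel-intensity histogram, one of the simplest scalar descriptors one can extract from an image:

```python
import numpy as np

def shannon_entropy(pixels, bins=256):
    """Shannon entropy (in bits) of a grayscale pixel-intensity histogram."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A flat monochrome image carries no information...
flat = np.zeros(10_000)
print(round(shannon_entropy(flat), 2))  # → 0.0

# ...while uniform noise approaches the 8-bit maximum.
noise = np.random.default_rng(0).integers(0, 256, 10_000)
print(round(shannon_entropy(noise), 1))
```

Real artworks fall between these two extremes, and it is the trajectory of such measures across periods and styles that the categorization studies track.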
What else can we learn by analyzing an art piece with scientifically rigorous procedures, apart from what style it is, in which period it was created, how complex its structure is, or whether computers can adopt and recreate its style? Like the fractal dimension of Pollock’s works, could it be that there are “hidden” dimensions in other artists’ works that we have not appreciated yet? Can the discovery of these dimensions teach us something we did not know about their process or their motives? I think it is very intriguing to explore art in this way. One can only imagine what structures may be lying undetected in masterpieces, cunningly and purposefully (or accidentally!) hidden there by the creators.
In our second week, I would like to go down to a very basic level and illustrate how scientific procedures interact with artistic creation processes. In fact, scientific thinking in the visual arts has very long historical roots, with artists contributing deeply to the generation of knowledge. Leonardo da Vinci, for instance, emphasized corpuscles as the unit of light 200 years before Newton. Light as an element of the visual arts has passed through several phases of treatment by artists’ styles and skills.
However, one of the most significant shifts in the use of light was carried out by the Impressionists, in their first attempt to literally apply Newton’s law of light. They used only the seven original color bands of the light spectrum, separately and without any mixing, in order to show that our colored vision consists of those seven bands.
When Claude Monet painted Rouen Cathedral, he certainly had no intention of drawing the cathedral itself; rather, he sought to monitor the changes of light through the hours of the day and how those changes affect our visual perception. Indeed, his attempts were artistic practice-based research, not just paintings!
Despite this considerable shift, the final work was still confined to the canvas. What happened in the New-Media Arts is a departure from the illusion of reality toward a concentration on the function of reality. Through this shift, artworks could be used to generate real investigations, as they are built on real functions.
In this way, we have moved from illusion to function, and this transition has led to paradigmatic conceptual, technical, and aesthetic shifts.
In fact, I am not recounting the history of the use of light in the visual arts; I am taking it only as an example of how artistic creation processes have shifted from the skills of illusion to the skills of building reality, from mimicking the visual appearance of an object to mimicking its real function. Within this context, the artwork becomes a mixer that combines several disciplines and reinterprets them in a unified setup, generating a new generation of knowledge based on real experience, of which the audience is a part.