Whilst the AI version is presented without any modifications, it is worth noting that the human transcript is the result of several changes and amendments, made in conjunction with the interviewee, which itself demonstrates the need for direct human intervention to generate a satisfactory version.
Yannick Hofmann is an interdisciplinary artist and curator. He works at ZKM | Center for Art and Media Karlsruhe as a project lead in the field of research & production and teaches at the Media Arts & Sciences department of the Darmstadt University of Applied Sciences.
Yannick Hofmann (YH)
Klaudiusz Ślusarczyk (KŚ)
YH: My name is Yannick Hofmann. I work at the Hertz-Lab, which is a trans-disciplinary research and development platform at ZKM | Center for Art and Media in Karlsruhe. We operate across genres, in an interdisciplinary fashion, bridging many different artistic disciplines and following many different artistic and scientific concepts, which we then translate into different kinds of media, be it music or sound art, the visual arts, or any form of digital or computer-based art you could possibly imagine. We implement these types of projects in both exhibitions and events.
KŚ: Could you tell me something about one of your current projects? I see that you have many projects at present, but is there a project that you are currently involved in or have had some input into?
YH: At the moment, we are working on machine learning and data-driven applications and their use in artistic practice, but we have many different fields of work, ranging from spatial sound to XR technologies and computer vision.
KŚ: Is your background more technology-based or is it more artistic or research-based?
YH: I have an artistic background, especially as a media artist, but I also have a sound knowledge of technology-oriented media theory. I would say it is a kind of triad between theory, artistic practice and technological implementation.
KŚ: Both as an artist and a researcher, I actually have a couple of questions I am interested in. May I ask these?
YH: Of course.
KŚ: I am working on a project at the moment that takes on the idea of selfhood, or how we identify and experience ourselves within the current presence of new technologies and the omnipresence of media-orientated activities… [description of self maintenance project] … another research direction looks at identifying human traits that can be replicated by intelligent machines. One of the things in particular that I am interested in is human speech, and here I am taking on an idea of Jacques Lacan's: that there is always a lack in speech, something missing, and that this lack is perhaps what drives us to speak. The question of whether the impossibility of articulating desire becomes the motor of speech itself is what interests me, and I wonder if this is one of those pursuits an intelligent machine cannot replicate. My question to you, from a technological perspective, is: can an algorithm be created which is satisfied with not achieving a satisfactory result, with not achieving a goal? In simple terms, can an algorithm function on the premise that it never reaches the goal it sets out to achieve?
YH: I think I have to circumnavigate your question in a way. I was thinking of the example of neural network models which potentially run their training epochs forever and don't exactly know when it is best to stop the training procedure. This would be one example: you train the neural network or, better said, you train the model and feed it data. You could possibly train the model forever, until you decide to stop the process or you find that the model is fit enough. Potentially, however, it could train forever. So this would be one example of an algorithm which would probably be fine with not having the very best result, simply because it does not know what the best result is. This happens when a GAN [generative adversarial network], for instance, lacks criteria for identifying the best relation between discriminator loss and generator loss. Though, in general, in programming you have specified goals to achieve. If you declare a loop, for instance, you must give it criteria for how long it should calculate or repeat.
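The open-ended training loop described above can be sketched in a few lines of Python. This is an illustrative toy, not code from the Hertz-Lab: the "model" is reduced to a single noisy loss value, and the point is simply that nothing inside the loop knows what the best result is; stopping is always an outside decision.

```python
import random

def train_forever(max_epochs=None):
    """Toy training loop. The 'model' is just a loss value that
    drifts downward noisily. Nothing inside the loop defines
    'good enough'; without max_epochs it would run forever."""
    loss = 1.0
    epoch = 0
    while max_epochs is None or epoch < max_epochs:
        # one training step: the loss improves on average, but noisily,
        # so no single step can be identified as 'the best result'
        loss = max(0.0, loss - random.uniform(-0.01, 0.05))
        epoch += 1
    return epoch, loss

# the stopping criterion is imposed from outside the training process
epochs_run, final_loss = train_forever(max_epochs=100)
```

In a real GAN the situation is the same in kind: discriminator loss and generator loss oscillate against each other, and without an external criterion there is no epoch the algorithm itself can label "best".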
KŚ: So I guess it is possible to think of an algorithm that functions on the idea that it never reaches a goal, or whose goal is not to reach a specific desired objective but to be in a constant drift, so to speak?
YH: Yes, you can basically program 'everything'; you just need arguments which are broad enough that the algorithm is able to consider many different actions. But I think it is a very specific question in each case.
KŚ: I recently heard of an experiment where two bots were set up to communicate just with each other, and in the end the language of communication between them became so abstract that we could not understand what they were transmitting to each other. The way I understand it, maybe a certain type of communication was happening between them that we simply don't understand. What I am interested in is whether it is possible for one of the bots operating within an algorithm not to respond to the other. For example, like human beings in a conversation when one person chooses not to respond, not to say anything. In other words, is there a possibility of one of the bots having a choice not to respond, a sense of free will, we could say? Might that be possible in the future of machine communication? Taking my earlier example, would it be possible for one of the bots to decide that it doesn't want to respond, not as a glitch but as a decision?
YH: In my opinion, 'free will' is the wrong term. You need to give the machine 'don't answer' as an option to choose from; you have to program this choice. For a chat situation with a chatbot, for instance, you feed the chatbot different conversation data. Basically, a chatbot works in such a way that you write, say, many different possibilities of asking the question 'how are you?': 'how is it going?', 'how was your day?', etc. You would then map these to an intent that matches basic human reactions. You also need to give it the possibility to 'not answer', which could potentially be an empty string. Then, you need to program an answer. You must give it criteria like 'wait for one minute, and if nothing happens, repeat your question or phrase it differently', but this you need to tell the machine. However, I am not too confident that there will be real free choices the machine will make.
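The intent mapping described here can be sketched as follows. The intent name and phrasings are invented for illustration, and a real chatbot framework would use a trained classifier rather than exact string matching; the point is only that "not answering" (here, an empty string) is just another programmed response, never a free decision.

```python
# Hypothetical minimal intent map: several phrasings map to one intent,
# and silence must itself be written in as a possible response.
INTENTS = {
    "greeting_wellbeing": {
        "patterns": ["how are you?", "how is it going?", "how was your day?"],
        "responses": ["I'm fine, thanks!", ""],  # "" = the programmed 'don't answer'
    },
}

def respond(utterance: str, stay_silent: bool = False) -> str:
    """Match an utterance to an intent and pick a programmed response.
    The bot can only 'choose' silence because we put it in the list."""
    text = utterance.strip().lower()
    for intent in INTENTS.values():
        if text in intent["patterns"]:
            responses = intent["responses"]
            return responses[1] if stay_silent else responses[0]
    return "Sorry, I don't understand."
```

Even the decision of *when* to stay silent would have to come from another programmed criterion (here, the `stay_silent` flag set by the caller).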
KŚ: So the decision not to respond would still be a task, not a choice made in response?
YH: The thing is that if you train a conversation with a bot using text data, the problem already starts because the bot doesn't really notice when the first bot says something like 'how are you?' and no response comes. The second bot can't decide not to respond on its own. Instead, the machine will just assume that another question or comment will come on top of that. It simply won't understand that there was a decision not to answer. All of this needs to be trained.
KŚ: You talked about this idea of training, but are you saying that particular parameters are being set which, with time, become more flexible, or more elastic, in terms of decision making, or do they always remain strictly within these parameters?
YH: Actually, they don't take time into consideration unless you program them to take it into consideration. For instance: if there is a lack of answer, then, after maybe 30 seconds, phrase the question differently or ask a different one. Tasks like this need to be programmed into the script infrastructure of the chatbot system.
KŚ: Earlier you mentioned something very interesting, namely that the bot responds to the intention of the speaker, and we know from linguistic research that language communication contains both what is said and what is meant. In other words, one can have a conversation whose intention is less apparent to either of the speakers, for example. When the intention, or what is meant, is elusive, how does an AI respond to that? If we come back to the chatting bots example, how do the bots actually pick up on the intention rather than the literal meaning?
YH: Well, basically you have to teach the bots at the very beginning of training; you need to take the machine by the hand and just go through many different conversation data sets to let them learn and gain experience. For the machine itself, it is very hard to get the actual intent if it is somehow hidden, for instance. It really needs to be trained on the hidden meaning. I don't have a good example of a sentence with a hidden meaning, but just imagine this type of sentence. You still need to map the intent that is actually meant, and I am not too confident that we will see such [solutions], without human assistance, [performed] in an unsupervised way during the next few years. I think it will take some time for the machine to develop some sort of own will.
KŚ: Do you think that it will be possible at some point in the future? A classic example in human communication is between partners in relationships, where your partner asks you to do something and you don't really want to do it, but because you respond to their need, or you want them to feel better, you say 'Yes, of course I will do this for you', whilst the initial intention was 'I don't want to'.
YH: At the moment, as far as I know, this is not possible, but with my ability of imagination I think that in the future it could be possible, in a way that I don't have exactly in my mind right now. At present, it is not possible without actual intent mapping and manual annotation. For now, it would have to be done by a human who says 'OK, this phrasing of that question or comment actually means this'. Even though the machine saw many kinds of comments or questions which were mapped to another intent, it is not possible without a manual annotation.
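The manual annotation step described here amounts, at its simplest, to a human-maintained mapping from surface phrasing to the intent actually meant. The phrases and intent labels below are invented for illustration; in practice such annotations would feed a training set for a classifier rather than a literal dictionary lookup.

```python
# A human annotator maps surface phrasing to the hidden intent;
# both the phrases and the labels here are hypothetical examples.
MANUAL_ANNOTATIONS = {
    "yes, of course i will do this for you": "reluctant_agreement",
    "sure, whatever you say": "reluctant_agreement",
    "yes, i'd love to!": "genuine_agreement",
}

def annotated_intent(utterance: str) -> str:
    """Return the human-annotated intent if one exists; otherwise fall
    back to the literal reading, since without a manual annotation
    the hidden meaning is invisible to the machine."""
    return MANUAL_ANNOTATIONS.get(utterance.strip().lower(), "literal")
```

The fallback makes the interviewee's point concrete: any phrasing a human has not annotated is taken at face value.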
KŚ: I imagine that it is building up an encyclopaedia within which a machine needs to function as such. Is there anything about technology in the future that you are sceptical about? A lot of people are saying that with the advancement of machine learning, the way we work, or the way we identify ourselves through work, will change, and that a large part of the current blue-collar sector of the community will not be able to work. This group of people will somehow have to be re-taught different skills, or possibly a [guaranteed] minimum wage will need to be introduced for people who don't have to work. For me this is a point I feel somewhat sceptical about when it comes to deciding whether we should go in this direction. Is there something like that for you?
YH: We could take the decision not to spend more time researching, constructing or manufacturing those kinds of algorithms, but I think it is not too easy to stop those kinds of developments. However, I totally agree with what you said about our concept of work: at some point it will dramatically change, because of all the decision-making processes which happen instantaneously, like detection or object recognition, which are very basic examples. Let us just say that the decision whether this is a staircase or whether this is a blue elephant can be made very easily by humans; we just know it within a split second, and a machine can be trained on such an operation quite easily. That's why these types of working processes can become automated, and this will definitely lead to the loss of many jobs which we have had for the last century. They will be gone very soon. But, on the other hand, this does not mean that we will not have other kinds of jobs or different kinds of work, because the whole concept of work will change. And the question is: do we fear this kind of development, this kind of change which humanity will undergo?
KŚ: Zygmunt Bauman speaks about the value of work. He suggests that the idea of having a job is actually a way to make life worthwhile for many people; he says that working is something that fills the emptiness of existence. I wonder, if we take that away from a particular sector of the community and introduce a [guaranteed] minimum wage so that people can exist without having to work, how will we find a different way that sufficiently fills this sense of worth and prevents this [condition] of being maintained in a kind of inertia, of comfort, of consumption, because you are given money in order to consume, etc.?
YH: Work can also shift from physical, labour-oriented work to more, let us say, 'intellectual' work. There is maybe a future where working might also mean gaining more knowledge, and this could give us a chance to develop or deepen our knowledge in such a way that we will have more time and more resources at hand to develop ourselves further intellectually. This would be the ideal world that I imagine we then just shift into…
KŚ: … well, as you were just speaking, I thought that because we are coming back to the structure of language and adopting this knowledge in advanced programming and new technologies, it might provide us with further insight into how language is formed and structured. Thank you so much for finding the time to speak.
Google Speech-to-Text AI transcript:
YH: my name is Jana Kaufman I work with Zika and hats lab which is a transdisciplinary research and development platform here at the Zika and Cal school we operate cross java and interdisciplinary bridging many different artistic disciplines and following many different artistic scientific concepts which we then follow through different and kinds of media be it music or sound art be it the visual art speed in any form of digital or computer-based art you could possibly imagine and and then we implement those kind of projects both exhibitions as well as events for the more we bring out publications settling at the intersection of artistic and scientific research
KŚ: could you tell me about maybe like a current project that you are I see there is you know many projects that are open at the moment that's something that you that you have directly maybe worked on or some handsome input
YH: so at the moment we have sort of a focus point on sound art and so far that we are part of three large-scale EC funded projects which the settling in this domain so at the moment we investigate on the application of machine learning algorithms and into sound artistic projects respectively music related projects trying to push forward those kind of artistic disciplines.
KŚ: actually more from let's say from the technological expertise or more from our artistic or research-based work
YH: so I have an artistic background media artistic background though I also have deep knowledge in let's say technology as well as my media theory so I would say it's sort of a try us between theory artistic practice as well as technological implementation
KŚ: right so I have maybe some questions that I'm kind of interested in myself as an artist and researcher so can I ask you a few questions
YH: Italy so few Oscar
KŚ: yes so part of I mentioned earlier that I'm that I'm working on a project at the moment that maybe I'll tell you a little bit more about it it's basically for for some time now I have been asking this question of selfhood or how do we identify or how do we experience ourselves yes and particularly how new technology influences the idea of how we experience ourselves in the world … another research direction looks at identifying human traits can be replicated by machine yes or by an intelligent machine and one of the things that I've done or found in my research is that for example in speech in human speech hmm and this is looking at Jacques Lacan's ideas that there is always a lack that because desire can never be replicated in what we say yes but there is always something missing in speech and that is maybe the motive why we speak yes that we can never really verbalize or articulate desire okay and I wonder how if that is one of those things that a machine can never really replicate so my question to you is from a technological or perspective can for example an algorithm be created which is satisfied with not achieving satisfaction not achieving a goal in a way do you know what I mean in CERN whether an algorithm can function with the idea that it can never reach the the goal that it's set sets out to do
YH: I will probably now for the next minute circumnavigate your question in a way as part of the thinking as I was saying it's early in the morning so I'm sorry but I was thinking of one example like the generative adversarial networks which potentially run in epochs you could possibly run those epochs forever right and you don't know when you should stop mm-hmm during the training procedure this would be one example you train the network and you drain it well that's better say you train the model would you fit into the network you could possibly train it forever and you must then decide when you stop the process and when you say the model is now fit enough but potentially it could train train train forever so I think it this would be one example for an algorithm which would probably be fine with just having up the very best result because it just doesn't know what is the best result it's lacking criteria for identifying what's the best relation between discriminate I'll also generate others for instance though in general you have specified goals to achieve if you declare for instance a for loop then you must feel it criteria until when will it calculate our repeat of voila loop
KŚ: so I guess it is possible to generate the possibility that an algorithm functions without reaching satisfaction like satisfactory goal yes that its goal could be that it's unsatisfied with the result you have property
YH: you can and basically you can not hope ran and everything you just need to have arguments which are broad enough or which take into consideration which can take into consideration many different things but it's I think it's a very specific question in each case
KŚ: okay I recently heard of this experiment with two two BOTS were set out to communicate just between each other yes yes and that the result was that in the end there was the language of communication became so abstract for us that we didn't understand what they were communicating between each other yes I was wondering because this I understand and maybe maybe it's maybe there was a certain type of communication that we just don't understand that was happening between them but what I'm interested in is is it possible for an from one of the bots or algorithms not to respond for example like in human beings yes we can have communication but then there's a moment where you choose maybe not to say anything to say is this possibility of free will possible do you think in the future or not because they are set out to communicate yes this is their goal that we are is it possible that they may be a situation that one of the bots decides not to for a while
YH: I think free will look but then we probably the wrong word because when you need to give the machine or the but even the possibility to just don't answer mm-hmm so then this is as a choice suppose you'll have to program so a chatbot for instance has you you basically feed it a different data of conversations into the into the network so which basically means you have you yo did you once check into how
such a chat bot is created well I don't I'm not aware in terms of technical but so it so basically you you have for instance you've write many different possibilities to ask the question how are you how are you yes how is it going how was your day and then the answers let's say this you will then map to an intent and then tenders then basically getting behind your feelings right so you would then need to give it the possibility to not answer which could potentially be empty string for instance but thus you need to tell the Machine and then on the other hand you need to program let's say the answer you must give it criteria like okay wait for one minute and if nothing happens then repeat your question or phrase it differently but this you need to tell the Machine and I'd I'm not too confident that there will be sort of a free-will like the Machine does it on its own
KŚ: so the decision not to respond would still be a decision yes it would be will are like responding …
YH: the thing is if you train it on conversation text data then the problem already stands because you don't really see if it says something like how are you and then there won't be the response the machine will then just realize that there will be another question on top of that more another comment so it it just won't understand that there was a decision on the other hand side not to to answer properly you know all of this needs to get trains
KŚ: yes and you talk about this idea of trained yes so are you talking about that there's there's some particular parameters that are being said with time these parameters are more flexible or more plastic in a way in terms of decision making or do they always remain strictly within these parameters
YH: actually they don't take time into consideration unless you program them to take it into consideration like there is a lack of answer after maybe seconds mm-hmm prays differently or ask another question something like this needs to get programmed into the script infrastructure of the chat pod system
KŚ: earlier you said something which was interesting that it's that the bot responds to the intention of the speaker yes and we know linguistics that you know that there is meaning yes behind the signifier so that I can be saying something to you or you can have a conversation but there could also be another meaning that's that's laid down so the intention would be something different yes yeah not exactly as what the words how how does may I respond to that or how does this when if we come back to this chatting box how do they pick up on the intention rather than on the actual literal meaning
YH: yeah basically you have to in the very beginning while you train the chatbot you need to take the machine by the hand and yes just yeah go through many different conversation datasets so that it can get experience but for the machine itself it's very hard to get behind the actual intent if it's somehow hidden for instance it really needs to it then needs to get trained on the also on the hidden meaning like you need to put in let's say I don't have a good example for hidden meaning sentence with the hidden meeting so just imagine the senses whether hitting a meeting and then you need to you still need to map the intent which is actually behind it and and I'm not too confident that we will experience that it will individually without human assistance so in an unsub in an unsupervised way during the next years so I think this will take take some time for the machine to develop some sort of own will
KŚ: you think it is possible though in terms of that for example you know I mean a classic example can be between like a partner relationship yes that your partner asks you to do something and you don't really want to do it but because you respect partner you want them to do better you say yes yeah I'll do this for you but the intention is that or maybe not the intention but let's say the the real hidden meaning behind it is that I really don't want to do it
YH: at the moment it would not be possible as far as I know so but I have you our imagination and I think that in the future it will be possible in a way which I had on have in my mind right now right now I think it's not possible without an actual intent mapping and without a manual annotation I would call it done by a human who says ok this phrasing of that question or comment is actually meaning this even though the machine probably saw many of those kinds of comments or questions which were mapped to another intent so without a manual annotate annotation at the moment it's not so
KŚ: I imagined it that it's building up in an encyclopedia basically yes it can kind of function within that let's say that volume you know is there anything that you're maybe skeptical about the future and technologies there's something that you know a lot of people are saying that with the advancement of let's say machine learning etc that for example the JIT the way we work or the way we identify ourselves as working for example in a factory or etc will be in danger yes that we will not have the blue-collar workers as such and that there will be a large sector of the community that will not be able to work and that they will somehow have to be retaught different skills or possibly as was the the idea in Switzerland that there would be a minimum wage given yes for people who don't have to work so for me that's like a point of being a little bit skeptical about maybe making these decisions of whether we should go it this way is there something like that for you or the question is Rashad
YH: who could take the decision of not spending more time researching into those kinds of algorithms or respectively actually constructing a constructing or manufacturing those algorithms I think it's not too easy to to stop this kind of development but I totally agree with you or with the things you just said that at some point our concept of work will dramatically change because basically all decision-making processes which happen very instantaneous like that's a detection object recognition are just very basic examples but let's just say the decision whether this is their case or whether this is a blue elephant you have the see can be done very easily by by humans we just split of a second we know it mm-hmm the machine can get trained on such operations very easily so yes notice so and this will lead to the fact that many working processes can yet automatized and this will definitely lead to the fact that many jobs which we now have which we had but the last centuries will be will be gone very soon but which this on the other hand does not mean that there won't about other kinds of jobs or other different kinds of work because the whole concept of work I think it will change yes and so the question is do we fear this this kind of development or this kind of change which humanity will undergo
KŚ: well it's a does being a permanent speaks about the value of work yes that he says that the idea of of having a job is actually our way to make life worthwhile for many people yes that it's it's something that fills the emptiness of existence the idea that you work whatever it is yes and I wonder if we take that away from a sector of community and we introduce this minimum wage for people so that they can eke they can exist without having to work I wonder whether that's sufficient was sufficient or how will they find maybe not this meaning but how will they find is this sense that they are worth their life is worth something rather than being just kind of maintained in this state of inertia of comfort of of consuming because you're given money in order to in order to consume etc
YH: so I imagine work can also shift from let's say more physical labour orientated work to more let's just say brain work yes so that in the future work might also be to get more knowledge right so this could give us the chance to develop or to deepen our our knowledge because we just have now more time we have the resources at hand to develop ourselves further intellectually so this would be the idea world that I imagine that we then that we just shift mm-hmm but so instead well shift mmm let's say the medium of work in a way yes that we don't what do you think will be the future of text communication or basically is there a still place for text-based as we know it communication do you think it will move into a different sphere
YH: let's just say it's not the field where I know too many things being perfectly honest but when it comes to this yes texts I mean what what is text at all I mean if you if you see text as something which consists of a particular chromatic for instance a tremor yes then this is something which is also super important for machinery for machines to understand what we're actually saying so we did not a dollar shift away from from actual text yes in a way yes so just one
KŚ: in a way we're still using the alphabet in a way yes even it is pro coding
YH: I think that also programming has been made possible in a way whether this kind of programming which we now apply also depends on things like generative grammar I love by Chomsky for instance so it's very interrelated even as far as I know so what would be the future of text-based communication I don't know I have to admit I don't I don't know
KŚ: I was thinking just as you were saying that that maybe maybe with with because we're working or coming back to their structures of language in order to actually use them or adopt them in new technologies yeah maybe it's gonna be a way to understand how language is formed in the end yeah yeah structured language means okay I have no more like questions but thank you so much yes I'm a mouse