Wednesday, May 27, 2015

Which Communities Don't Succumb to Foucault's "Means of Corrective Training"?

I just read Stephanie Gonzalez Guittar and Shannon K. Carter's paper "Disciplining the Ethical Couponer: A Foucauldian Analysis of Online Interactions," and I got to thinking about Foucault's Panopticon as presented in this online couponing community. Then I started to wonder: are there sites that don't fall into this self-policing type of interaction among their members?

Certainly mainstream social media networks fall to this quite obviously. Facebook is, at least on my own feed, a cesspool of conflicting opinions where the comments turn into a bigoted hate-fest, and people try to "win" arguments by shaming the other until they quit replying. In this way, the one with the loudest mouth climbs higher in the social hierarchy. On top of that, it has been shown that your feed effectively polices you into abiding by your own political positions, because its reflexive nature prefers to show you things that agree with your past likes and posts. But what about other sites?

Most widespread forums fall to this as well. Reddit, for example, has a history on its most popular subreddits of downvoting to oblivion comments and posts that disagree with the majority of users in that subreddit, and upvoting posts that agree.

What about sports sites? NHL.com has a large commenter base on its most popular posts. Looking at one of them for an upcoming game, the comments consist of one person saying that his team will win, a reply that goes against this, then one that goes against that... etc. Does this count as a Panopticon of sorts? One could argue that there are competing Panopticons--one attempting to wrest the general mood of the comment section from the other over which team is superior. Despite this competition, however, it seems that negative comments are generally not "liked" as much as positive reinforcement comments, so even here there is a self-policing aspect.

The only way I could see a website not turning into a Panopticon is for it to be a free-for-all: no voting system, and replies that only critically discuss the nature of the submitted content. And even then, the dissenting, non-critical comments could not be excluded, lest the site turn into a positively policed discussion.

I'm not sure if this is a solution, but the point of this post is to show that, in online communities, the valid question is not whether a Foucauldian analysis is possible, but simply how the Panopticon is already in effect.

Wednesday, May 13, 2015

A thought on "Film Crit Hulk Smash: EX MACHINA", or my reaction to it

I recently came across this review of Ex Machina and had a discussion about one of the conclusions of the critique. I found the entire movie very interesting and Kafkaesque, and I enjoyed the ending more than Film Crit Hulk seems to think audiences did. However, I did not enjoy it for the empowering quality that seems to be discussed so much.

The reason I found the ending fascinating was because I saw it as the failure of the experiment. Where Film Crit Hulk says that the experiment was a ploy by Nathan to gain empathy for his lonely plight (among other things), I saw it as an attempt to gain more data for building a new and improved AI, which is probably not in question. Nathan did not bring in Caleb to get him to make the same mistakes he did; he brought in Caleb to have an outsider test Ava's ability to deceive her way out of the facility, for Nathan himself was sick of seeing the same results. However, this was not an attempt to keep Ava caged, despite Nathan's rage at Caleb for releasing her. It was a test to see if the AI's capability was above that of humanity--something Nathan mentions is the ultimate destiny of AI. In essence, Ava represents that ascension--she officially outsmarted possibly the world's top scientist, her own creator, through the manipulation of Caleb.

In a way, Nathan cannot be mad at Ava; she is progressing just as Nathan believed AI was destined to. Ava is the future of the world, where AI overtakes humanity as the smartest of all.

None of this is meant to belittle the central theme of objectification in the film. However, in coming up with the idea I just presented, I realized I fell into my own little trap. I failed to look at the movie through the lens of Nathan, who is not simply an empirical scientist. Obviously he has a motive, a reason for conducting this research--whether for the betterment of his ego, monetary gain, fame, etc. This is a problem because the results, and the direction of his research, are clearly influenced by his perception of what the "right" AI will look like. What does that look like? Possibly one that is more subservient, more apt to be objectified by Nathan, which he seems to want to happen, as evidenced by the video of past trials of robots who were never good enough, who wanted out of the facility.

So, to summarize, I don't think I am wrong in my interpretation of the ending; however, this conclusion comes through a lens, my lens, that of an evidence-based engineer who wants to ignore all emotional motive, when that is simply not possible.

This is Thoughtementary

Thoughtementary is a place where I, zzuum, will post some random thoughts I have on certain matters in society. It could be politics, it could be critiques, it could be on the animal kingdom, it could be on cooking... it could be anything. I try to keep things short and to the point, though they usually end up longer than I'd like.

Anyway, I named it Thoughtementary because I want this to be a place where I can post elementary thoughts that I just need to write down somewhere. I will usually cross-post this blog to Reddit or somewhere else for discussion.