Blog Post 8


Research and Inspirations

This week I have been coding to get the parachute movement working with your hands. This has involved many hours of trial and error with different code until I was able to work out how to get the desired result. During this process, I came across a variety of problems with my VR experience that I had not noticed before. For example, when receiving the Y value of each of my hands, I discovered that, for some strange reason, the values for both hands were identical. This could have been a major issue: if the value for my left hand was always the same as my right hand, there would be no way to tell whether one hand was higher than the other. After spending some time looking at the ZigFu script and modifying different variables, I found a section of code that I commented out (commenting lets you mark lines within a script so that they are not compiled or run), which fixed my issue. From that point onward I simply had to take note of the data I was receiving and figure out the math and coding logic behind what I was trying to achieve.


Progress and Process

After discovering that the values of my left and right hand were staying the same, I spent a long time trying to figure out why this was happening and what I could do to stop it. I worked out that the cause could be traced to a few minor settings in the ZigFu script that I could easily change. By commenting out an "if" statement within the ZigFu script that was setting the position of my model, and by using the settings shown below, I was still able to move the avatar in virtual space while also receiving the correct data. In other words, the "if" statement in the ZigFu script was causing me to receive incorrect data; commenting it out overcame this issue. Below is the "if" statement that I commented out, followed by the settings I used.

// The culprit inside the ZigFu script: while this block was active, the
// Y values I read for my two hands came back identical.
if (UpdateRootPosition) {
    transform.localPosition = (transform.rotation * rootPosition);
}

[Screenshot: the ZigSettings values I used]

Once I fixed this issue, I moved on to coding the parachute movement. First I ran a lot of tests to verify the data I was receiving and how to use it. Using some ingenuity, simple math and a couple of "if" statements, I was able to implement movement using the Kinect. Basically, I subtract the position of my left wrist from the position of my right wrist and store the result in a variable. This gives me either a negative or a positive value, which tells me whether one hand is higher than the other. I then created an "if" statement that says: if one hand is X amount higher than the other, then apply Y force in Z direction. The first time I tried this, I moved backwards instead of forwards, because the ZigFu script had inverted the forward and backward axis. This was an easy fix: I just applied the force in the opposite direction, so a backwards force moved me forward. An example of this can be seen below:

// diff holds the wrist-height difference described above; its sign tells
// us which hand is higher.
if (diff <= -0.3f) {
    // One hand at least 0.3 higher than the other: steer one way.
    // (Vector3.back pushes us forward because the ZigFu script inverted the axis.)
    ScriptName.rigidbody.AddForce(Vector3.right * aForce);
    ScriptName.rigidbody.AddForce(Vector3.back * bForce);
    ScriptName.rigidbody.AddForce(Vector3.down * 5.0f);
}
if (diff >= 0.3f) {
    // Mirror case: the other hand is higher, so steer the opposite way.
    ScriptName.rigidbody.AddForce(Vector3.left * aForce);
    ScriptName.rigidbody.AddForce(Vector3.back * bForce);
    ScriptName.rigidbody.AddForce(Vector3.down * 5.0f);
}

Synthesis and Reflection

By this stage of the process, I was starting to get into the fine-tuning of my VR experience. With all the challenges that I had come up against, and resolved, I was becoming much more practiced at working out how to find solutions. I also found that I started to "think" more logically and cohesively when developing code. By this time, I had read so many different blogs and posts on online coding forums that sometimes even completely unrelated posts contributed to my understanding of the process. I believe that this deepening skill-level is what allowed me to identify the potential problems in the scripts that I had not noticed earlier. As such, I can honestly see how working on this project has improved my overall knowledge and ability in coding. These have been valuable skills to develop, and with this knowledge I have some useful competencies that I can offer potential employers.



Blog Post 7


Research and Inspirations

This week I have been working a lot on coding and working with the data I am getting from the Kinect. I am currently trying to replace the "P" key input with a movement detected by the Kinect. I initially used the "P" key as a basic input so I could develop my code without the complications of using the Kinect. I used a simple technique called a delta check to turn the data I got from the Kinect into the values I needed. A delta check works by taking the position of something, in this case my hand, and a moment later taking the position again. You then subtract the previous position from the current position to get a value for how far that thing has traveled. For example, if my hand was at a height of 1.5 and is now at 1.1, the delta is -0.4, meaning the hand moved down. I then used this delta value to activate my parachute.

I have also started working on getting my Oculus integration to work with my experience. After talking to a professor in the EDP program, I was told that it is very easy and is basically a matter of replacing a Character Controller. After downloading the free Oculus integration package for Unity and spending a little time with it, I found that it is even simpler than that. The Oculus package provides a pre-configured pair of cameras that work simultaneously, one for each eye.


Progress and Process

The first part of getting the delta check to work was being able to use the data from the ZigFu script. While trying to do this, I encountered an error that I had not noticed before. When trying to access a variable from the ZigFu script from within my own script, I was getting an error saying that the ZigFu script did not exist. I met with Bill, my coding professor, and he was unable to tell me why this was happening. The most likely culprit was the code, but he checked it all and said that my code was correct, so it should have been working. After spending some time online sifting through information and posts on numerous forums, I found someone with a somewhat similar issue, and thankfully he offered a solution. The problem was due to the location of the file within the project folders. Because the ZigFu script was buried within the folders of the ZigFu package, it wasn't getting compiled until after I tried to call on it with the GetComponent function. This meant that Unity did not even recognize that the ZigFu script existed when I tried to call on it. It was an easy fix: all I had to do was move the script into the Standard Assets folder created by Unity.
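
As a rough illustration of what I was attempting, here is a minimal sketch of this kind of cross-script access in C# (the component and variable names, ZigSkeleton and LyData, are stand-ins for the real ZigFu names, not the actual API):

using UnityEngine;

// Minimal sketch: a C# script reading a value exposed by another component.
public class HandReader : MonoBehaviour {
    private ZigSkeleton zig;

    void Start() {
        // Because the ZigFu scripts are JavaScript, this C# reference only
        // resolves if the JS script is compiled in an earlier pass, which is
        // why moving it into Standard Assets fixed the "does not exist" error.
        zig = GetComponent<ZigSkeleton>();
    }

    void Update() {
        float leftHandY = zig.LyData; // use the tracked value each frame
    }
}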

Now that the data was coming through, I wrote the code for the delta check inside the Update function. Update runs every frame, so the data is refreshed continuously, allowing near-instantaneous tracking of body movements. Below is the code for my left hand; I then rewrote it for my right hand.

prevLY = curLY;          // remember last frame's left-hand Y value
curLY = LyData;          // read this frame's value from the ZigFu data
Ldelta = curLY - prevLY; // negative = hand moved down, positive = up

Once the delta check was up and running, I had to do some testing to find out what values I was receiving. I noticed that, due to the action sequence in my experience, the user is constantly falling; therefore, my delta values are constantly negative. This meant I had to take these negative values into account when creating my "if" statement, so that the parachute did not activate just from falling. Below is the "if" statement that I ended up using, allowing the user to activate the parachute by using either their left or right hand to pull an imaginary (virtual) drawstring.

if (grounded == false) {
    // A sharp downward yank of either hand, beyond the constant negative
    // delta caused by falling, opens the parachute.
    if (Ldelta < -2.0f || Rdelta < -2.0f) {
        ScriptName.rigidbody.AddForce(Vector3.up * force);
        ScriptName.gravity = gravChange; // adjusted gravity while the chute is open
    }
} else {
    ScriptName.gravity = normalGrav; // restore normal gravity once landed
}

After spending some time looking at the Oculus package, I figured out that I could simply replace the main camera I was using with the one provided by the Oculus package. After updating my Oculus drivers and SDK and replacing the camera, I had my experience running in the Oculus in no time at all.


Synthesis and Reflection

This was a week of highs and lows, and thankfully it ended on a high! The substitution of the keyboard command with the Kinect data was a relatively easy process and pretty much went without a hitch. Next, the Oculus integration process was also quite straightforward. By this stage, I was feeling pretty pleased with myself, as it was all coming together quite quickly, particularly given that I'd been able to find a more efficient way of effecting the Oculus integration. These things were very rewarding and I thought I was doing really well, until the next step suddenly presented a major stumbling block. My first reaction was to turn to my trusty sources of help: online forums and the professor who had taught me coding. A cursory search online didn't turn up any solutions, and when my professor was also baffled by the issue, I knew I was in trouble. So I returned to my search on the online forums, scrolling through multiple websites and posts that offered any potential information on the issue. When I finally found a posting describing a similar issue to what I was experiencing, it reminded me of a problem I had run into previously that my professor had helped me work through. I was trying to pull data from a Javascript (JS) script into a C# script, but Unity was telling me that the JS script didn't exist. This happens because of the way Unity compiles its scripts: compilation happens in passes based on folder location, and scripts in special folders such as Standard Assets are compiled before everything else, so a C# script can only see a JS script that lives in one of those earlier-compiled folders. When I moved the file location of the ZigFu script, Unity found it and my code worked perfectly.

After implementing the fix, I pondered the issue at length. I tried to make sense of the problem and understand how something as simple as file location could have such a profound effect on the way Unity read the data. This gave me a better, more thorough understanding of the way that Unity compiles its scripts. After some time, I came to realize how important file location can be in the integration of different devices.


Blog Post 6


Research and Inspirations

This past week I have been working on creating a more believable fall and parachute launch. When I met with a previous professor of mine, Bill Depper, he suggested that instead of trying to modify the overly complicated Character Controller that comes with the Unity package, I should do some research online and create my own Rigidbody Controller.

Before getting too committed, I thought it would be good to gain a proper understanding of the difference between the two. I found the video below, which thoroughly explains the difference and when you would want to use one as opposed to the other. This made it clear to me that a Rigidbody Controller was the correct choice, and why.

I also found this wiki page which not only briefly describes why you would use a Rigidbody Controller but also gives you examples of how to code them in both Javascript and C#.

I have also been working towards receiving the Y data of my hands from the Kinect. This was quite simple because of the way ZigFu coded their skeleton tracking.


Progress and Process

A lot of my process in creating a Rigidbody Controller with more believable movement and fall than the Character Controller was simply trial and error with different variables. This is something that needs to be done with both Character Controllers and Rigidbody Controllers, but a Rigidbody Controller requires significantly less tweaking because it uses Unity's built-in physics.
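
To give a sense of this, here is a minimal sketch of the kind of Rigidbody variables involved (the values are purely illustrative, not the ones I settled on):

using UnityEngine;

public class FallSetup : MonoBehaviour {
    void Start() {
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.mass = 80.0f; // rough body mass in kilograms
        rb.drag = 0.5f;  // air resistance; more drag means a slower terminal velocity
    }
}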

Because I am using a Rigidbody Controller, I needed to figure out how to use the AddForce function. After talking with Bill and looking at the extensive online help Unity provides, I was able to implement a basic parachute effect when I pressed "P". Using a boolean I had set up to check whether I was in contact with the ground, I was able to make it so that the parachute can only be activated while I am in the air, and deactivates when I land. Below is an example of how I managed to do this.

if (grounded == false) {
    if (Input.GetKey(KeyCode.P)) {
        // "P" opens the parachute: push upward and adjust gravity.
        ScriptName.rigidbody.AddForce(Vector3.up * force);
        ScriptName.gravity = gravChange;
    }
} else {
    ScriptName.gravity = normalGrav; // back on the ground: restore normal gravity
}
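
For completeness, here is a minimal sketch of how a grounded flag like this can be maintained using Unity's collision callbacks (the "Ground" tag is an assumption for illustration; this is not my exact code):

void OnCollisionEnter(Collision col) {
    if (col.gameObject.CompareTag("Ground")) grounded = true;  // touched down
}

void OnCollisionExit(Collision col) {
    if (col.gameObject.CompareTag("Ground")) grounded = false; // back in the air
}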

When working with the Kinect data, I immediately started modifying the ZigFu code to print the various variables until I got what I needed. Once I had the data, I simply assigned it to a variable in the ZigFu code and then, using Unity's GetComponent function, brought the variables into my own script to be used as needed.
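
The printing itself was just Unity's standard console logging, along these lines (the variable name here is a stand-in):

// Print a tracked value to Unity's console each frame while exploring the data.
Debug.Log("Left hand Y: " + leftHandY);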


Synthesis and Reflection

As with much of my project, a lot of time has been spent working on a particular section or process within the experience, only to find out that it isn't working quite the way I want. And, as has happened previously, I've had to write off the time and energy invested in what I'd been doing, and start from scratch. This is perhaps one of the most frustrating things, and is hard to do, as you keep hoping that it is all just going to fall into place: if you try just one more thing, or another, it might all work. But it's all part of the learning process, and it has contributed to a deeper, and more extensive, understanding of the different programs that I'm working in. In particular, I have developed a much more proficient ability in writing code, becoming more skilful at working out scripts and commands, as well as at trouble-shooting when commands don't run. Getting multiple error messages about fatal flaws can be very stressful, particularly when the deadline for completion is looming very near!

I also value all the information that is posted online in the different forums where people share experiences and expertise to help others. I plan to contribute to these online help forums by posting some of the information and reflections that I have developed in these blogs. In this way, I will not only add to this developing body of knowledge, but will also give back to the community that has been of invaluable help to me.


Blog Post 5


Research and Inspirations

After talking about my project with my cousin, who has also studied in the area of multimedia and design, he suggested that I look into ripping data from Google Maps/Earth to create my model of the Grand Canyon. He felt that this would probably provide a much more realistic result than the model I had started to create in Unity, which lacked true-to-life imagery due to the limited range of terrain and water profiles available without paying for textures. After a short time researching this suggestion, I found many sources of information from people doing exactly this with a variety of third-party programs. In short, the programs enable users to rip height-map data from either Google Maps or Google Earth, depending on the program. It was apparent from the examples posted on these online forums that the finished products created using this approach were far superior to what I currently had, so I set out to create my own. Although I had already spent quite a number of hours developing a 3D model of the Grand Canyon in Unity, I felt it was worthwhile exploring this other approach.

I found two videos to be of most help during this process: the first showed me the process of creating the 3D model, and the second led me to Universal Maps Downloader and showed me how to use it.


Progress and Process

First, I reviewed the blogs from my information sources to identify the best freeware for ripping map data from Google Maps or Google Earth. Using a powerful online tool called OpenTopography and a downloaded third-party program called MicroDem, I was able to create a very impressive 3D model in 3ds Max with ease. I then used another third-party program, Universal Maps Downloader, to download the high-res images that I wanted to use as textures.

The basic process to create the model was to input co-ordinates into OpenTopography, which produces a file that MicroDem can read. Using MicroDem, I was then able to convert that file into a grayscale image based on the height-map data. From there, I used the MicroDem output to create a texture and a "bump map" (a 3D-modeling technique that simulates minor surface details such as bumps and wrinkles) within 3ds Max. Next, I used a "displace effect", which essentially acts as a force upon an object's geometry to reshape it; this expanded a previously flat plane out into the 3D model I am now using (the idea is sketched in code below). I then exported that model as a .obj file and imported it into Maya to be UV textured. Using the high-res images I got from Universal Maps Downloader, I textured the model until it gave a natural and true-to-life image of the Grand Canyon. An example of the finished model follows the sketch.
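
For anyone curious, the displacement idea can also be expressed in code. Here is a minimal Unity C# sketch of the same concept (an illustration only, not the 3ds Max workflow I actually used; the heightmap texture would need to be imported as readable):

using UnityEngine;

// Raise each vertex of a flat plane by the brightness of the matching
// heightmap pixel, the same idea as 3ds Max's displace effect.
public class HeightmapDisplace : MonoBehaviour {
    public Texture2D heightmap;       // grayscale height data (e.g. from MicroDem)
    public float heightScale = 50.0f; // how far full white pushes a vertex up

    void Start() {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;
        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < verts.Length; i++) {
            // Sample the heightmap at this vertex's UV and push the vertex up.
            float h = heightmap.GetPixelBilinear(uvs[i].x, uvs[i].y).grayscale;
            verts[i] += Vector3.up * h * heightScale;
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}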

[Image: the finished Grand Canyon model]

I imported the model into Unity and replaced what I was previously using with my much-improved landscape. Unfortunately, Unity will only accept a model with 6,000 vertices or less, and my complete 3D model is around 18,000 vertices. To get around this issue, Unity automatically split the model, but not the texture. This meant that I had an un-textured model which didn't look anywhere near as good as before. To fix this problem, I manually split and remapped the object in Maya and then re-imported each of the textured sections.


Synthesis and Reflection

When I first began creating the 3D model, I used Unity's inbuilt terrain, something I was familiar with and had already developed a certain level of expertise in. However, this function of Unity is not designed to create amazingly realistic environments, so I wasn't overly excited by the results I was achieving. The rock faces and the water of the river below weren't particularly convincing, which would significantly diminish the true-to-life effect I was hoping for, so exploring the other approach using Google Maps or Google Earth had merit despite the hours I had already invested. It also took a little work to identify the most suitable textures that would all work together in the model I planned to create, but the imagery from Universal Maps Downloader was much more natural and authentic. This means the overall experience, when projected through the Oculus goggles, will be much more believable, and therefore worth the effort. That is why I decided to abandon my initial efforts at creating the 3D landscape model and use the other programs to create something compelling and believable.


Blog Post 4


Research and Inspirations

During this last week I have been preparing to code with the Kinect. This has involved a significant amount of research into what program or programs I would need to get the Kinect to talk to Unity. A lot of the online community pointed me in the direction of a Unity package called ZigFu. After getting this package to work, I can see why the online community was so keen on it! Not only is it very well-developed, but it actually provides you with some sample "scenes" in Unity (basically examples) of what you can do with it.


Progress and Process

After a lot of time spent downloading various drivers, uninstalling them, and reinstalling others, I finally managed to get the Kinect to talk to Unity. The ZigFu package provides a variety of sample scenes to play around with, which give you a feel for what is possible within Unity. Having experienced this, it is clear to me that the package is not only extremely well-developed, but perfect for what I want to do. Ranging from full-body tracking to just hands, this package is exactly what I need to create an immersive and believable experience.


Synthesis and Reflection

For a while now I have been working on fundamental issues of simply getting the different devices to function properly so I can begin the initial coding. To my disappointment, it has become more and more obvious that it's unlikely I'll be able to complete two fully-developed experiences. After some thought and reflection, I've decided that it will be better to dedicate all my time to the execution of a single, extremely well-polished experience, with believable models and movement, to allow for a truly immersive result. By doing this I avoid the possibility of failing to bring both experiences (one flight and one non-flight) up to a high standard and ending up with two mediocre experiences, which would take away from the overall outcome of my project.

Now that I have the Kinect talking to Unity, I plan to focus on getting the coding and movement aspects of my skydiving experience up to a functional standard. Once that is functioning, I plan to begin modeling my parachute. I will then swap out the simple objects I have been coding with for my models. Keeping in mind the possible difficulties of coding with the Kinect, if I have not completed the Kinect coding by Week 9 I will revert to my previous idea of having a time-based event trigger the parachute. I believe this would affect my experience in some way, but it would not completely take away from the believability, while still providing something that is fun and hopefully thrilling.


Blog Post 3


Research and Inspirations

This week my inspiration has come from researching online videos of people doing not only incredibly daring, but unbelievably impressive things. An outstanding example of one such feat is shown in the video below. Using a GoPro camera (or similar), the video captures a remarkable stunt by a guy in a wingsuit, showing his flight through a ravine and a particular rock formation. A concept along the lines of what this dare-devil does is what I would like to replicate in my Oculus experience.

I believe this concept, if done well, is something that could not only be extremely fun but would get the adrenaline pumping so that people would be asking to do it again and again.


Progress and Process

Now that I've been feeling better, I have been able to further develop my idea of what I am actually going to do within the Oculus. I have decided that I will create an experiential adrenaline system where users can sky-dive, use a wingsuit, or possibly even base jump in a variety of places around the world. Some of my current ideas are jumping from cliffs, the Eiffel Tower, the Grand Canyon, etc. I think this would be extremely fun and provide people with an experience they otherwise would not be able to have. I would also like to create experiences that are not necessarily flight-based. These could include activities such as bike riding, longboarding/skateboarding, snowboarding/skiing or an alpine luge/bobsled. You would control yourself within these experiences with the Xbox Kinect to create a believable and immersive experience.


Synthesis and Reflection

Basically, my inspiration has pushed me in the direction of creating a sky-diving or adrenaline experience. I have chosen to work on the Grand Canyon. After visiting the Grand Canyon last year, I found this natural Wonder of the World to be not only breathtaking, but also extraordinarily beautiful. It is also a place where people do not get to sky-dive, even in real life. Therefore, if I can make it very realistic, the experience is likely to have significant appeal. I think this will create an experience that will interest not only those not daring enough to sky-dive but also those who are. Furthermore, I think this concept could have a commercial application. For instance, at Lone Tree there is a place that offers a simulated sky-diving experience. If this were coupled with the VR experience that I'm creating, it would provide an incredibly authentic and life-like sky-dive that would be far superior to what they offer now. I'm sure that would attract more people and more repeat business, as you could alter the location, as suggested above. I'm also considering developing a second, non-flight-based experience, and I think a simulation of a luge would be fun, as it is another thing that not a lot of people will ever get to experience.

So far I have managed to create my Grand Canyon landscape and have begun to texture it. I have also started some minor coding, attempting to create a believable parachute. Once I have this down pat, along with my completed landscape, I will begin creating the code for the Kinect. As a back-up plan, if I find the Kinect too difficult to code with, I will still have my code left over from the parachute launch on key-press. I will then modify this to be a time-based event that triggers after a certain amount of free-fall.

To start with, I am going to focus entirely on completing my Grand Canyon experience before attempting to create a second environment (the luge). I'm not sure how long it will take to develop all the different aspects of the experience, nor do I know yet whether I'll be able to get the different devices to work as I want them to and to communicate or interact with each other. These are important issues that need to be ironed out first. Furthermore, I will have to refresh my mind on the use of Unity and coding, so completing one experience first is a sensible strategy and will enable me to work faster on the next project, if time permits. Once all the coding is completed, I will begin the modeling side of things, creating 3D models that I will import into Unity to replace the simple objects I have been coding with as markers. This will include things like a character model, parachute, and so forth.


Prompt

The article 'The Rapidly Disappearing Business of Design' is an interesting one and, in a lot of ways, reminded me of an article I critiqued in another class that I started (and had to drop) earlier this quarter. The article I'm referring to is one written by the renowned macroeconomist Robert J. Gordon, 'Is US economic growth over?' According to Gordon, since 1750 economic growth in the U.S. has been driven by game-changing inventions during three industrial revolutions, which greatly improved living and working conditions in the U.S. However, these fundamental, one-time-only inventions may have come to an end, and future innovations causing such revolutionary change are less likely. On this basis, Gordon concluded that the best years of U.S. economic growth may be over. In a similar vein, if human-centered design is the single biggest driver of social change, as predicted by Melinda Gates, and we are currently seeing a strong movement whereby design firms are selling out to large corporations and big advertising agencies, then maybe this trend too will be a contributing factor to Gordon's prediction. Without design practice sitting as an independent field outside of large corporate takeovers, it is likely that the practice in 2015 and beyond will similarly see a negative impact on the possibility of 'revolutionary change.' This is essentially what the article is claiming: it has never been more important for design to remain an independent field able to address issues 'that sit outside a single corporate mandate or organizational footprint', in terms of both business competitiveness and social impact. However, with the huge push of corporate takeovers, defending design as an independent enterprise has also never been more difficult.

The next article essentially pans the concerns expressed in 'The Rapidly Disappearing Business of Design' as an over-reaction. Bluntly titled 'Design Studios Are Not Going Away', it claims that a number of design firms will always remain independent enterprises, because some designers will always resist becoming part of large, corporate in-house teams. And the number one reason? Not money, because they can probably earn far more as part of a large corporation. It's about the very essence of their trade: creativity. Large corporations can only offer a narrow scope in terms of variety of projects, as well as imposing strict limitations in terms of style guides, the number of products and/or brands in the portfolio, and so on. Creatives need an environment in which to flex their creative talents or they will burn out. The article claimed that design teams need to rotate onto new projects every 4-6 months in order to keep 'the thinking, motivation, and general design fresh.' Large corporations may also find that, while the idea of an in-house design team was good in principle, the reality over time is that they don't get the fresh, innovative concepts they were hoping for. The author may be right, and this trend may be short-lived. Eventually we may see large corporations return to outsourcing design work to independent design firms.


Blog Post 2


Research and Inspirations

Amongst my research into the Oculus and what people are doing with it, I have encountered many inspirational works and combinations with other technologies. Below are two notable finds that offered helpful inspiration and ideas for my own project.

The first video shows the use of the Kinect and Oculus with a PlayStation game, whereby the gamer uses his whole body to interact within the game, rather than just the PlayStation controller. This makes for a more realistic experience.

The second video is less explanatory, but shows the use of the Kinect, Oculus and the Wii Balance Board to create a hoverboard simulation. The Kinect was used to track the player's body movements and translate them into the VR experience, the Wii Balance Board was used to steer the virtual hoverboard, and the Oculus provided the full immersion in VR. I felt that the environment for this experience wasn't well-developed, a very basic simulation, which would have detracted from any real sense of reality when trying this immersive experience.


Progress and Process

Unfortunately, my plans for this project were disrupted. Due to a potentially life-threatening snowboarding accident, followed by two periods of hospitalization, surgery, an inability to study because of pain and medication, and missed classes, I decided to no longer pursue my original idea. Given the time constraints of the project, I do not think it would be wise to tackle a technology that I have never worked with before (the Emotiv Epoc headset), so I decided to switch to the Xbox Kinect, which I am much more familiar with. I still hope to pursue my previous idea in the future; however, given my current health issues, it is more prudent to develop an immersive VR experience using the Kinect and to work with data that I recognize. My revised plan is for the Kinect to track the body movements of a single person wearing the Oculus, with that data translated to move their figure around in virtual space. Although this concept is not completely new, I plan to create an immersive virtual reality experience that is truly unique and realistic, not to mention fun and compelling. I will continue to use Unity for my coding and environment creation, as well as Adobe Audition for any auditory needs.


Synthesis and Reflection

Because of my hospitalization for a total of 11 days, I have been unable to progress my project as far as I would like. This major setback has made me re-evaluate the project and choose technology that I am familiar with over technology I have never used or even been in contact with. Over the course of my degree, I have come to recognize that being overly ambitious can affect my ability to complete a project within the given time constraints: I might have a brilliant idea, but not be able to actualize, execute and complete the assignment within the parameters and deadlines of the course. It doesn't matter how inspired an idea might be, because the key to successfully completing a project is to meet all the assessment deliverables. This has been another learning situation for me, in which I have come to recognize that my abilities have been compromised by my accident. Therefore, rather than risk being unable to complete a very ambitious project, I decided to change focus so that I can complete a solid project that will still push the boundaries. At this stage I am conducting research to identify an innovative approach to combining the two technologies (Kinect and Oculus) to create something that will really "wow!"


Prompt

The article titled "Indie Musician's Viral Tour Diary Was a Marketing Stunt for His Startup" provides a couple of interesting perspectives, especially for those thinking about starting a career in the music industry. Thankfully, at this stage, I'm not! Whatever perspective you take, the bottom line is that, like many of the arts, music is a tough industry in which to make a good living, unless you become a mega-star. It's unclear whether Jack Conte, of the YouTube-famous band Pomplamoose, had pure intentions when he shared the post about the difficulties of being an independent artist. He was slammed by other artists for being a whiny cry-baby, berated for staying in 3-star hotels while on tour as opposed to slumming it, and for not trimming the fat off every meal and every support service used on the tour. And, to top things off, he was criticized for using the post as a way of promoting his other company, Patreon. The truth of the matter appears to be that this other company is how Conte makes any real money. But whatever his motivation for the story he posted on Medium, I think the clear message is that passion is a must in an industry that is unlikely to make you rich. On a personal level, this issue is an important one, particularly in terms of career options after I graduate.

The next article, "The Pomplamoose Problem: Artists Can't Survive as Saints and Martyrs", is really a continuation of the same issue: the struggle artists face to make a decent living. This article was a little different in that it didn't focus on putting down Pomplamoose member Jack Conte, but rather attempted to send a message: that the culture in America is flawed, in that people are not prepared to pay for the cultural arts. "We love your music," the public seems to be saying, "but we aren't going to pay for it anymore." In this digital era, where the open-source mindset suggests everything and anything should be free for everyone, the idea of paying for music is increasingly considered unreasonable. This doesn't mean people expect musicians to stop making music; they just expect it to be available for free. How musicians are supposed to survive is someone else's problem. As suggested in the article, "artists are supposed to be the ultimate saints and martyrs." The point was also made that people fail to see the ripple effect of limiting the earning ability of musicians: backing bands, engineers, and lighting and sound technicians are all jobs that will be affected the more that Americans (and other nationalities) refuse to pay for the creativity and talent of musical artists. Perhaps, rather than being "soul lifting", this article serves as a warning that each and every one of us who downloads music for free needs to rethink these expectations.