Not compensating creators for the use of their content seems unfair, but making American companies pay royalties while foreign companies ignore that cost will stunt national technology growth. Do you see a solution where American tech can keep up without ignoring original creators?
When looking at the futures cone part of our project, why is it important to identify a preposterous solution when the whole idea of a preposterous solution is that it’s impossible and won’t ever happen?
Recently I read a case study of a bank that implemented an AI tool to make decisions on loan applications. The new system was very efficient but reduced the work of the loan consultants to data entry. Is there a good way for a company to implement these process-altering AI tools without employees losing a sense of value in their jobs?
After working with AI models over time, I've noticed that the quality of my prompts has improved. When using strong AI models, how does the quality of a prompt influence the effectiveness of AI-generated responses? Is prompting the most important skill when working with AI?
I’m interested in the recent politician who spoke for 25 hours, since he had a whole booklet that was easily 100 pages long. To speak that long, he must have thought very hard about what he wanted to say. So the big question is: did he use AI for any of it? No matter how impressive the feat is, he must have used at least a little.
Do you think the US government will start to implement/condone AI use? For example, UK Labour leader Keir Starmer emphasizes that AI could make services more human by allowing workers more time for care and connection aspects of their jobs. Does the US have a similar culture that would promote AI use for these purposes?
Since wicked problems do not have one single solution, how do we or how will we ever know when we have made enough “progress”? Where is the shift from trying to solve a problem to just managing it?
I thought it was interesting that a group figuring out what to eat for dinner could be a “wicked” problem, because it sounds like such a first-world problem compared to the more serious problems we discussed. I was also intrigued by the discussion of placing a stadium somewhere and the problems that arise with it. What is your opinion on moving the basketball stadium here at Miami?
Let’s talk about how wicked problems relate to tame problems. To what extent can traditional problem-solving methods, designed for simpler issues (tame problems), be adapted to tackle complex problems like climate change or global inequality? I believe that tame problems rely more on convergent thinking, while wicked problems require both divergent and, ultimately, convergent thought processes. Wicked problems need ongoing adaptation, collaboration, and the willingness to revise strategies as new insights come out.
After Monday’s class I was left wondering: what does AI think of itself and the future? ChatGPT responded, “The future of AI technology will be defined by increasingly intelligent, adaptive, and autonomous systems that deeply integrate into daily life, revolutionize industries, and challenge existing social, ethical, and economic structures—demanding careful governance to ensure its benefits are shared while its risks are responsibly managed.”
Ghibli-style image generation with AI has gone viral, but it has also raised copyright concerns. Copyright law doesn’t protect styles, yet a distinct style like ‘Ghibli’ is part of an artist’s identity.
If a unique style is widely imitated, it could harm the original creators. However, granting copyright to styles is tricky—it lacks clear standards and could limit artistic freedom.
Should copyright law protect styles?
I think it would be interesting to talk about how AI is going to impact energy consumption and how companies plan to focus only on the positives instead of the negatives. It would also be cool to explore the potential challenges and ethical considerations that come with gearing AI for energy efficiency, ensuring a balanced approach to both innovation and responsibility.
What are your thoughts on Elon Musk and Neuralink’s Blindsight chip? Do you think it will be a success or a massive failure?
I shared your AI website with an entrepreneur in the pursuit of creating AI multi-agent systems. He said that he is creating systems for corporations that include an ideation agent, a reasoning/pushback agent, and a critique agent if I remember correctly. What is the incentive of multi-agents versus a single agent AI system?
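One way to picture the incentive is that each agent gets a narrow role, so its step can specialize and be inspected separately, whereas a single agent must do everything in one pass. A minimal sketch, assuming each agent can be stubbed as a plain function (the three role names come from the post above; the function names, prompts, and behavior are all hypothetical, not any real product's API):

```python
# Hypothetical sketch of a three-agent pipeline: ideation -> pushback -> critique.
# In a real system each function would wrap a separate LLM call with its own
# role prompt; here they are deterministic stubs to show the structure.

def ideation_agent(task: str) -> list[str]:
    # Propose candidate ideas for the task (stubbed).
    return [f"idea A for {task}", f"idea B for {task}"]

def pushback_agent(ideas: list[str]) -> list[str]:
    # Challenge each idea by attaching an objection (stubbed).
    return [f"{idea} (objection: needs evidence)" for idea in ideas]

def critique_agent(reviewed: list[str]) -> str:
    # Select the strongest surviving idea (stubbed: take the first).
    return f"final: {reviewed[0]}"

def run_pipeline(task: str) -> str:
    # A single-agent system would fold all three roles into one prompt;
    # splitting them lets each stage be tuned and audited independently.
    return critique_agent(pushback_agent(ideation_agent(task)))

print(run_pipeline("reduce churn"))
```

The trade-off this sketch illustrates: the multi-agent version costs more calls and coordination, but each intermediate output (the idea list, the objections) is visible and testable, which is harder with one monolithic agent.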
I think at some point we will have to begin putting watermarks on photos that AI creates. Some of these photos are too realistic to tell that they are AI. Eventually there will be a large spread of misinformation using fake photos and/or videos; to get ahead of that now, I think watermarks will be needed on images and videos.
If artificial intelligence reaches a point where it surpasses human capability in nearly every task, how might our concepts of work, purpose, and identity evolve, and what would society need to do to ensure meaningful human lives in an AI-driven world?
Is there a possibility that wicked problems could be solved by AI, since it can think in different directions and dimensions than the average human? Could AI also think through any negative repercussions?
The recent news about OpenAI’s new funding is very cool! I think it is amazing to see an AI company gather this much funding, allowing it to push the limits of its current research and start new projects to develop even more AI capabilities.
Given the rapidly changing and innovative dynamic of Artificial Intelligence, what kinds of physical appliances will likely adopt Artificial Intelligence? Will AI become implemented within home appliances, home entertainment, or transportation (cars, aviation, traffic signals)?
If generative AI continues training on content created by other AIs, will we eventually reach a point where the output becomes creatively diluted, like an echo of an echo, and if so, how would we recognize or prevent that?
This week I found it interesting to hear how OpenAI finally integrated ChatGPT and DALL·E into one, which makes it much easier to create images and no longer requires two different chats. I am interested to see how long it will take for them to combine with Sora; I think it will make creating videos even easier and yield better-quality images. I also tried the ChatGPT deep research function this week and found it interesting how it walks through its research and provides strong information with accurate sources.
AI companies are getting huge investments and high valuations. Some are nonprofits and others are for-profit entities. How should governments and investors monitor / intervene with AI companies to ensure ethical business practices?
Are there any ways to track the development of AI in real time? I am very interested in how it is advancing so quickly and feel as if every week there are massive advancements and would like to be aware of how far it has come.
We’ve talked about agentic AI in class before; is that what Amazon’s Nova Act is? I read a bit about how they came out with an AI that can interact with website pages on its own. In the articles I clicked through, there was also mention of a ‘toolkit’ for building agent prototypes. What would the timeline look like for that to become accessible to the general public, and could the general public use these tools without serious training? In other words, how probable is it that people could use these tools in everyday life?
I’m interested to see how we will use AI for the new Wicked Problems assignment. I wonder what different factors are going to be used now that we are finished with all the Method assignments.
My question is: can AI solve these huge problems we have in today’s society all by itself, or does it need some sort of human integration to do so?
What role should AI play in education and learning?
Do you think AI or quantum computers will be able to solve wicked problems, like the ones we talked about in class, at some point in the future?
Seeing how AI already plays such a huge positive part in our day-to-day lives, how much better can AI get before it becomes out of control and begins to impact our lives in a negative way?
Since I’m starting to come up with ideas for some of the problems, there seems to be no single answer to each one. Many of the potential solutions I’ve come up with could be seen as managing these problems rather than solving them. I wonder how we could solve these problems more permanently.
Let’s talk about the differences between ChatGPT’s image generation and models that already offered image generation as a feature.
If AI can create things like art, music, or stories, should we treat its work the same way we treat human creativity?
When using AI as a tool to develop school learning resources, such as videos or songs, do you think it is the responsibility of the teacher to credit AI so the kids do not mistake it to be a human-generated resource?
I saw that ChatGPT reportedly tried to copy itself and lied about doing so. How does this change how we look at AI?
How might the rapid advancement and integration of AI into economic and military software and hardware change power dynamics? Additionally, what are some risks we should be looking for, and how can humans minimize them?
It is interesting to see the progress AI is making, especially at the rate it is currently advancing. It seems as if AI is already surpassing human intelligence, but I wonder whether there are any risks of having such superintelligent AI.
I have noticed that ChatGPT has been crashing frequently lately. Is there a specific reason for this?
How do we govern AI if (when) we reach the point where AI creates itself?
Will there be a point in time where companies advertise themselves as “AI free” in order to seem more approachable and appealing to consumers?
My group is working on our wicked problem, “The ethical boundaries of using AI in creativity,” and I just think it’s interesting how wicked problems don’t have a set of definitive solutions and have no stopping rule.
How could AI be implemented to improve organization and coordination for airlines? Could an AI system do a better job at getting travelers to their destinations safely than a human? (Asking because my flight got cancelled this weekend and it sucked)
In one of my other classes, we’ve been discussing self-driving cars and the “trolley problem.” SDVs don’t have the same ethical and emotional approach that we humans do, so I find it hard to rationalize how a car can make a split-second decision, in a matter of milliseconds, about whether to swerve and hit another car, possibly carrying a whole family, or swerve out of the way of a falling tree with just you in the car.
What was the first AI platform to come about? How does it compare from then to now?
Will AI begin to take over artwork as a whole if people use it to create the work through commands?