ChatGPT has been around for a while now, and some people just call it "Chat." DeepL, which recently launched in South Korea, has also started offering subscriptions. On the surface the service is no different from Google's machine translation (Google's is free, DeepL's is paid), but DeepL's natural translations spare me the trouble of going back to the original text to decipher the odd phrasings of traditional machine translation. Its renderings of articles from the New York Times or the Financial Times are so clean you would think they were written by local reporters. When I think of the scrawl in some newspapers, I wonder whether an AI could write better.
The same goes for coding. I've been using Python for a while, but I'm not skilled enough to get the results I want, and that has always weighed on me. Chat solves that problem. You can simply say, "Write me some Python code to graph Samsung Electronics stock over the past year," and for more complex code you can write a prompt that coaches the AI into giving you the results you want.
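A minimal sketch of what such a request might produce: plot a year of daily closing prices. In practice Chat would likely fetch real data with a package such as `yfinance` (e.g. ticker `005930.KS` for Samsung Electronics); here a synthetic price series stands in so the sketch is self-contained and runs offline. The ticker, numbers, and filename are illustrative.

```python
import math

import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Synthetic stand-in for ~250 trading days of closing prices (KRW).
# A real script would download these, e.g. with yfinance.
prices = [60000 + 3000 * math.sin(i / 40) + 10 * i for i in range(250)]

fig, ax = plt.subplots()
ax.plot(range(len(prices)), prices)
ax.set_title("Samsung Electronics (005930.KS), past year (synthetic data)")
ax.set_xlabel("Trading day")
ax.set_ylabel("Close (KRW)")
fig.savefig("samsung_1y.png")
```

The point is less the chart itself than that a one-sentence request yields a working script like this.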
The same goes for blog posts: give it a few keywords and it will write a long post that weaves in every keyword you gave it, as smoothly as the calligrapher Han Seok-bong spreading gold ink across a folding screen, so that you can't tell who wrote it. What used to be a marketing chore of paying people who didn't know the subject to write has become a one-minute job for AI, or more precisely, for generative AI.
Translation, coding, and writing aren't manual labor, but they are things that used to be done by human hands, and generative AI is good at them. Human hands are still needed to turn the output into commercial results, but the AI comes remarkably close. I understand the worry that it could take away jobs. (Of course, humans will find new things to do that they have never done before.)
With AI, we can now do things we used to have to rely on technologists for, and my first thought was, "How do we get it to do them well?" Chat understands context, but the more specific you are about what you want done, the more precise and on-point the results will be.
This is where prompt engineering comes in. For example, when asking it to write a blog post, you might preface the request with a pretext such as "You are a competent blog marketer, you are fluent in Korean, and you will write something that ranks well in search engines." I've seen the difference in results between writing without this step and writing with well-crafted prompts. Prompts themselves can be bought, and there are already prompt plug-ins for Chat; charging for a plug-in, separate from the Chat fee, is becoming more common.
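The pretext pattern described above can be sketched as the message structure used by chat-style APIs: a role-setting "system" message prepended to the actual request. No API is called here; the wording and keywords are illustrative.

```python
# The "pretext" sets the model's role before the real request arrives.
pretext = (
    "You are a competent blog marketer. You are fluent in Korean, "
    "and you write posts that rank well in search engines."
)
user_request = "Write a blog post using these keywords: DeepL, translation, AI."

# Chat-style APIs take a list of role-tagged messages in order.
messages = [
    {"role": "system", "content": pretext},
    {"role": "user", "content": user_request},
]
```

The same request with and without the system message can produce noticeably different results, which is exactly the gap prompt sellers are monetizing.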
The question of how to make things happen is really a question of what to do. When technology matters, or when technology becomes a constraint, even the best ideas get stuck in a technological rut.
AI will change much of this. You will still need specialists for highly advanced technology, but you can use AI to explore the possibilities of the basic technical building blocks. If someone like me, who studied Korean medicine, wanted to draw a network of the relationships among the components of the prescriptions in the classical texts, that used to require a PhD, or at least a master's degree. Now, with good data and a little programming knowledge, you can ask Chat to write code that draws the network. You can even show it the shape of your data and it will code accordingly.
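As a sketch of the kind of code such a request might yield, here is a small co-occurrence network built with the third-party `networkx` package: herbs are nodes, and an edge's weight counts how many prescriptions the pair appears in together. The formula names and herbs below are made up for illustration, not drawn from any actual text.

```python
from itertools import combinations

import networkx as nx

# Hypothetical prescriptions: formula name -> component herbs.
prescriptions = {
    "Formula A": ["ginseng", "licorice", "ginger"],
    "Formula B": ["licorice", "ginger", "jujube"],
    "Formula C": ["ginseng", "jujube"],
}

G = nx.Graph()
for herbs in prescriptions.values():
    for a, b in combinations(herbs, 2):
        # Increment the edge weight each time a pair co-occurs.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(sorted(G.nodes()))
print(G["licorice"]["ginger"]["weight"])  # pair shared by two formulas
```

With real data, the same structure could feed `networkx` drawing functions or centrality measures to see which components anchor the prescriptions.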
When the limits of technology start to disappear, the question becomes what to build, and what to build comes down to an exploration of humanity. What people need and what they will actually use is hard to validate with data. Most data is retrospective. Analyzing data produced as the result of doing something is useful for interpreting the past, but it has limits for predicting the future.
For one thing, human predictions are not just observations; they are shaped by forces that try to change the status quo in response to those observations. From the 1960s we were told the oil would run out in 30 years. It didn't run out in the 1990s, and after the shale revolution we no longer worry about running out: technological advances let us find new sources of oil, and we have grown less and less dependent on oil for things like electricity.
We live in a time when the focus has shifted from 'how' to 'what'. 'How' is mostly multiple choice. 'What to do' is an essay question: there are a few keywords, but mostly a big blank sheet of paper that you have to fill in letter by letter. It is not something artificial intelligence can easily answer, and even if it did, the answer would be just one opinion among many, because it only becomes meaningful when humans accept it and move in that direction. The quest for what people can accept will come back into the spotlight.
Until now, we learned the languages of computers, like coding, to make them do things. Now they understand ours. When a computer couldn't speak our language, it was hard to put it to work; now that it can, what you need in order to put it to work is knowing what to do. I cautiously predict that the people who know what to do will be the ones who command artificial intelligence.
I am a doctor of Korean medicine who writes about Korean medicine (especially warm disease theory), social issues, and economics and business. This blog is a space where I lay out the way I see the world. Welcome to everyone who visits~