
Objectives, types, problems of Artificial Intelligence

Lecture



The goal of artificial intelligence is to create technical systems capable of solving non-computational problems and performing actions that require processing of meaningful information.

Here I collect some statements about artificial intelligence (hereinafter AI), its goals and objectives; perhaps I will develop this topic further in the section "Consciousness".



What is the ultimate goal of creating artificial intelligence?

...to build the most compact system of abstractions that describes the incoming empirical reality as fully as possible...

The ultimate goal of artificial intelligence (AI) is to improve the lives and abilities of humanity by automating tasks, increasing productivity, and solving complex problems that people face in science, medicine, the environment, and other fields.

The main areas of AI's ultimate goals include:

  1. Automation and efficiency gains: AI helps automate routine or time-consuming processes, making them faster and more accurate, which increases productivity in manufacturing, finance, transportation, and many other industries.

  2. Solving complex scientific and engineering problems: AI can analyze huge amounts of data and identify patterns, enabling breakthroughs in fields such as biotechnology, materials science, energy, and astrophysics.

  3. Improving the quality of life: The development of AI is aimed at creating solutions to improve medicine (diagnostics, treatment, drug development), education (personalized learning), ecology (monitoring and protecting the environment) and other socially significant areas.

  4. Creating intelligent assistants: AI helps create smart systems and assistants that can support people's everyday tasks, such as controlling home devices, helping with learning, managing time, and solving household problems.

  5. Modeling and understanding human intelligence: AI research contributes to the study and understanding of the human mind and thinking processes, which can also help in the study of neuroscience and cognitive science.

  6. Creating a safe and sustainable future: The application of AI in areas such as climate science, environmental protection and resource management can support sustainable development, mitigate climate change and optimize the use of natural resources.

The ultimate goal of AI, therefore, is to improve human capabilities, optimize various areas of life, and promote the development of knowledge and technology for the benefit of society.

On the goals of artificial intelligence


...You mentioned only an intermediate goal. "...to build a system of abstractions..., to describe reality..." - but why? Why waste energy, which is not so easy to obtain, on a complete cataloguing of the external environment?

Perhaps the goals should be located inside a system with adaptive capabilities (like an AI system), and not outside such a system.

On the Basic Goals of Artificial Intelligence


The simplest thing for an AI, as an example of an adaptive system, is to explore and catalog the external environment in order to fulfill the following basic goals:
a) find energy sources,
b) find shelter, a comfort space,
c) having secured (a) and (b), predict the possible dynamics of the external environment by studying it, and look for more effective energy sources and comfort spaces than those already found.

As a result, what is needed is not an indiscriminate archiving of everything and everyone, but only the archiving that serves the basic goals and is energetically efficient.
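The priority ordering of goals (a)-(c) can be sketched as a simple decision loop. The function name, the energy threshold, and the action labels below are illustrative assumptions, not anything stated in the text:

```python
# Illustrative sketch of an adaptive agent that orders its actions by the
# basic goals: a) energy sources, b) shelter, c) exploration/prediction.
# The threshold 0.3 and the action names are assumptions for the example.

def choose_action(energy, has_shelter):
    """Return the highest-priority unmet basic goal."""
    if energy < 0.3:            # a) energy sources come first
        return "seek_energy"
    if not has_shelter:         # b) then shelter / comfort space
        return "seek_shelter"
    return "explore"            # c) only then study and predict the environment
```

On this reading, "exploring and cataloguing" is not a goal in itself but the lowest-priority activity, entered only once the two survival goals are satisfied.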


I am sure that:
1. the basic goals of AI should be very specific;
2. AI can and should have a master. The master is whoever or whatever sets the goals; it can be either the intelligence itself or something external to it.



About the owner of artificial intelligence


Either the intelligence is its own master, self-sufficient in choosing goals and meaning, but then it must necessarily have basic goals and a hierarchy of goals.

Or the master is something external to the AI system; then the intelligence becomes part of a more general system that already has basic goals and a hierarchy of goals, and simply follows that system's instructions, whether in the form of algorithms or of direct commands.

The question of who is the "master" of artificial intelligence is controversial, touching on aspects of ownership, control, and responsibility for AI. Here are some key views on who can be considered the owner of AI:

  1. Creator or Developer: Typically, the companies or organizations that develop and implement AI are considered its owners. It is the developers and patent holders of AI systems who control their use and have the rights to commercialize the technology.

  2. User or Operator: In some cases, control over AI may be delegated to end users or operators who have purchased licenses to use it. For example, an organization that uses AI to process data may be considered the owner of the data created by it, but not necessarily the owner of the AI itself.

  3. Governments and regulation: As AI increasingly impacts society and the economy, many governments are introducing laws to regulate the ownership and use of AI. This is important for safety and human rights, especially when AI involves mass and personal data or decisions that could significantly impact people.

  4. Society and public interest: Some philosophers and experts believe that if AI has significant potential to change society, then responsibility for it should be distributed and, to some extent, it belongs to society. In this case, AI can be considered a public good, especially when it comes to technologies that are critical to humanity, such as in medicine or ecology.

  5. Responsibility and Ethics: The difficulty in determining the owner of an AI also relates to questions of liability. If an AI causes harm or makes a wrong decision, it is difficult to determine who is responsible: the creator, the owner, the user, or the AI itself (in the case of complex autonomous systems). This question is still open and is the subject of much legal and ethical debate.

Thus, the "owner" of artificial intelligence may be the legal entity that created or owns the technology, but there is no clear answer in society, especially when it comes to questions of liability and long-term consequences.


Testing the owner of artificial intelligence


It would be interesting to devise a "master check" for artificial intelligence:
for example, a check of where the AI gets its energy.
- If from a socket and/or a battery, then the AI has an external master who made the battery and the socket, "feeds" the AI, and gives it tasks in exchange for this food.

If the AI can feed itself, the question is more complicated.
If, for example, a solar panel breaks, the AI limits its functions, saves energy, looks for raw materials (drawing at first on a stockpile) and repairs the breakdown.
But here arises the question not only of basic goals - food and life - but also of the hierarchy of goals. If the AI has a goal higher in the hierarchy than food and life, then we need to find out which one, in order to answer the question: is it its own master, or is it carrying out an assignment from an external source?

Today, all AI systems that have been created are dependent: they perform the tasks of their owners and creators.


On the comparison of humans and artificial intelligence


AI cannot be compared to a child: a child is a potential adult, an intermediate system in a dynamic of development toward a full-fledged system, the adult.
AI can be neither a child nor an adult without its own basic goals and hierarchy of goals.

Comparing humans and artificial intelligence (AI) is a multifaceted issue, covering cognitive, technical, emotional and even philosophical aspects. AI is developing rapidly, but there are fundamental differences between it and human intelligence, which can be divided into several key areas:

1. Information processing and computing speed

  • AI: Computer systems and AI can process huge amounts of data in fractions of a second, perform complex calculations and analyze information at speeds that humans cannot.
  • Human: The human brain cannot compete in computing speed, but it has a unique ability for associative thinking and intuition, which often helps make effective decisions in uncertain situations.

2. Memory and learning

  • AI: Can store and retrieve data almost unchanged, has precise access to accumulated data, and can learn from huge amounts of information. But AI perceives data linearly and is limited by the algorithms that its creators have laid down.
  • Human: Human memory is more flexible, able to forget and process experiences. People can learn through experience and intuition, as well as from emotions and the social environment, which allows them to better adapt to new or unexpected situations.
3. Creativity and originality

  • AI: Able to create art, write lyrics, or even compose music, but operates on patterns and probability models, lacking true inspiration or motivation.
  • Human: Human creativity is determined not only by logic, but also by emotions, intuition and personal experience. This allows people to find non-standard solutions and approaches, to create something new without a clear dependence on previously obtained data.

4. Emotions and social connections

  • AI: In its current form, it does not have true emotions, although it can model and recognize them. It can only maintain social interactions within certain limits, such as responding to keywords or scenarios.
  • Human: Emotions play a key role in human life and decision-making. They help form deep social connections, provide support, show empathy, and work in a team. Emotions and an intuitive understanding of context also help a person find solutions in complex, multifaceted situations.

5. Adaptation and flexibility

  • AI: AI adaptation is limited by the scope of the given algorithms and training model. It can learn from new data, but this requires either refining the algorithm or significant amounts of training data.
  • Human: Capable of quick adaptation, not only absorbing information, but also rethinking approaches, forming new hypotheses and strategies. Flexibility allows a person to be creative and find solutions in conditions of uncertainty.

6. Ethics and morality

  • AI: Does not have an understanding of moral or ethical principles; it acts strictly within the framework of established rules and tasks. Moral aspects and values can only be taken into account as programmed parameters.
  • Human: Aware of ethical norms and able to evaluate one's actions in a moral context, which is especially important in matters where a decision affects the interests and rights of others.

7. Meaning and motivation

  • AI: Does not have its own goals and motivation - all its tasks and goals are set from the outside. AI does not understand the meaning of its work and acts only in accordance with algorithms.
  • Human: Seeks meaningful action, pursues personal goals, and can develop as a person. Human motivation is associated with the search for meaning, self-realization, and the desire to create something valuable for oneself and others.

While AI is capable of remarkable advances in processing speed, memory, and learning, humans retain unique capabilities such as creativity, emotional and social awareness, and the ability to deeply understand and find meaning. Combining the strengths of AI and human intelligence allows for systems in which machines assist humans, and humans contribute their expertise, values, and creativity, making their interactions especially promising for the future.


Why do we need artificial intelligence?

If we don't stray into science fiction, AI is needed to fulfill human goals; in terms of basic goals, to obtain and improve energy sources and the human comfort zone.

For example, a trading robot, as an example of AI, is needed to obtain a special kind of social energy - money. This AI sits at the terminal and rakes in money for its owner while the owner does something else, as he pleases :).
Simplified, AI is just a machine, like a windmill, but with a greater capacity for adapting to the wind, to loading and unloading, to grinding quality, and so on; in the end, AI likewise threshes grain, turning the millstones instead of a person and for a person.

Perhaps the difference in our approaches to the AI problem is that:
1. I simplify the AI problem, reducing it to the problem of adaptation, and
2. you, perhaps (I could be wrong), romanticize the image and capabilities of AI as a system of absolute and universal knowledge that needs nothing except to fill 1000 pages of an encyclopedia with new information every day.



Types of Artificial Intelligence

Artificial intelligence (AI) can be classified by level of development, method of interaction with humans, and functional focus. The main types of AI are as follows:

1. By level of intellectual abilities

  • Weak AI (narrow AI): These are specialized systems created to perform specific tasks. They do not have general intelligence and work within a narrow area, for example, voice assistants, recommender systems, facial recognition systems. Weak AI cannot go beyond the functionality provided and does not have the ability to self-learn beyond the given task.

  • Strong AI (general AI): A system with general intelligence that theoretically has capabilities comparable to humans, such as the ability to generalize, improve, and adapt to new conditions. General AI is still in the research stage, as it requires the creation of models that can make decisions in any conditions and learn without clear algorithms.

  • Superintelligence: This is a hypothetical form of AI that could surpass human intelligence in any area, including scientific reasoning, creativity, decision-making, and social interaction. Superintelligence is still only theoretical and raises significant ethical and philosophical questions about safety and control.

2. By type of interaction and method of learning

  • Reactive machines: These AI systems have no memory and operate only on current data. They can only process information they receive in real time and make decisions based on it. For example, the chess computer Deep Blue, which beat world champion Garry Kasparov, was a reactive machine.

  • Limited Memory AI: These systems can use past experiences to improve future decisions. Examples of such AI include self-driving cars that analyze road data and driver behavior. Such systems learn from the information they collect, but their memory is limited and they cannot form long-term ideas.

  • Theory of Mind AI: This is a future type of AI that could understand human emotions, desires, and intentions, and use this knowledge for deeper interaction. Such AI could potentially be used to create personalized, empathetic systems that understand the context and emotional state of the user.

  • Self-aware AI: A hypothetical type of AI that not only understands the desires and intentions of others, but is also self-aware. Such AI can be aware of its own actions and goals, make decisions based on introspection, and have a higher level of autonomy. Self-aware AI remains a subject of philosophical debate and has not been developed in practice.
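The contrast between the first two types that exist today - reactive machines and limited-memory AI - can be sketched in a few lines. The agents, the window size, and the observation labels are invented for illustration and are not taken from any real system:

```python
# Illustrative contrast: a reactive agent uses only the current observation,
# while a limited-memory agent also consults a short window of past ones.
from collections import deque

def reactive_policy(obs):
    """Decision from the current observation alone (as Deep Blue searched
    fresh from the current board position, keeping no history)."""
    return "brake" if obs == "obstacle" else "drive"

class LimitedMemoryAgent:
    """Keeps a short history, roughly as self-driving systems track
    recent road data to smooth their decisions."""
    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def act(self, obs):
        self.history.append(obs)
        # Stay cautious while any recent observation was an obstacle.
        return "brake" if "obstacle" in self.history else "drive"
```

The limited-memory agent keeps braking for a few steps after the obstacle disappears, which is exactly the behavioral difference the classification points at.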

3. By functional orientation

  • Machine learning (ML): This is a type of AI that uses algorithms and statistical models to learn from data on its own, without the need for explicit programming. Machine learning has subtypes such as supervised learning, unsupervised learning, and reinforcement learning.

  • Natural Language Processing (NLP): This type of AI enables machines to understand and interpret human language. NLP is used in chatbots, voice assistants, translation systems, and other applications that require natural language interaction.

  • Computer vision: Computer vision is a technology that enables AI to “see” and interpret visual data such as images and videos. Examples of applications include facial recognition systems, medical imaging diagnostics, and self-driving vehicles.

  • Robotics: This type of AI uses technologies that help robots perform physical tasks and interact with their environment. Systems with robotics AI can, for example, perform assembly in a factory or navigate in difficult terrain.

  • Expert systems: These are systems that simulate a human expert's decision-making in a specific area of knowledge. They store a database of facts and rules on the basis of which they offer recommendations or make decisions. Expert systems are used in medical diagnostics, finance, and technical support.
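The three machine-learning subtypes named above (supervised, unsupervised, and reinforcement learning) can be illustrated with deliberately tiny toy versions. The data, rules, and function names are invented for the example and stand in for real algorithms only schematically:

```python
import random

# Supervised learning (toy): 1-nearest-neighbour over labelled points.
def nn_classify(train, x):
    """train is a list of (value, label) pairs; return the label of the
    training value closest to x."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Unsupervised learning (toy): split unlabelled values into two clusters
# around their mean - no labels are used at all.
def two_groups(values):
    mean = sum(values) / len(values)
    return [v for v in values if v < mean], [v for v in values if v >= mean]

# Reinforcement learning (toy): epsilon-greedy action choice from
# accumulated rewards, as in a two-armed bandit.
def greedy_action(totals, counts, eps=0.0):
    """totals[i] / counts[i] is the average reward of action i so far."""
    if random.random() < eps:
        return random.randrange(len(totals))
    avgs = [t / c if c else 0.0 for t, c in zip(totals, counts)]
    return avgs.index(max(avgs))
```

The three functions differ exactly in what feedback they receive: labels, nothing, or rewards - which is the essence of the classification.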

4. By degree of autonomy and control

  • Support systems: Systems that help a person in making decisions, but the final choice remains with the person. For example, recommendation systems in online stores offer products, but the choice remains with the user.

  • Autonomous systems: Systems that can make decisions and perform tasks independently without human intervention. Examples include driverless cars, drones, and autonomous robots in manufacturing. Such systems can independently adapt to environmental conditions and perform assigned tasks.

These different types of AI can be combined and applied in a single application, creating complex systems that can effectively solve a wide range of problems in different areas of life.

"bio-intelligence extenders" (to avoid confusion, I use your term) are real AI systems that take on part of the process of collecting, analyzing information and making decisions to achieve goals set by humans.

"real AI" (to avoid confusion, I use your term) is only human fantasies and ideas about a subject that does not exist.

As for AI creating its own algorithms: self-learning is one of the problems of AI, but not the main one. They are all listed here: http://en.wikipedia.org/wiki/Strong_AI - I give the English link because the Russian one is long and clumsy; from the left panel of Wikipedia you can also reach the Russian page on strong AI.

The problem of creating artificial intelligence

The creation of artificial intelligence (AI) is accompanied by a number of complex problems, including technical, ethical, social and philosophical aspects. Here are the main problems facing AI researchers and developers:

1. Technical problems

  • Limitations in understanding and learning: Current AI systems learn well from large amounts of data, but often lack the ability to generalize or transfer knowledge to a new domain. Building flexible, adaptive AI capable of general learning (called strong AI) presents a significant technical challenge.
  • Complexity of algorithms and resources: Developing effective AI models requires huge computing resources and complex algorithms. This requires significant investment in data processing technologies, which can limit the availability of AI development.
  • Safety and reliability: AI systems can be vulnerable to failure, hacking, or unpredictable behavior, which is especially dangerous in autonomous vehicles, military, and medical systems. The reliability and resilience of such systems remains an important challenge.

2. Ethical issues

  • Bias and unfairness in data: AI is trained on data that may contain hidden biases reflecting cultural, social, or historical stereotypes. This leads to problems of discrimination and injustice, especially in areas such as recruitment, lending, and justice.
  • Transparency and interpretability: Many modern AI models, especially neural networks, act as "black boxes," making their decisions opaque to the user. This makes it difficult to understand why the AI makes a particular decision, which can lead to mistrust of the technology.
  • Liability: If AI makes bad decisions that cause harm, the question arises: who is responsible? This is a complex legal and moral issue, as responsibility may be shared between developers, operators, and users.

3. Social problems

  • Job loss: AI can perform many tasks faster and more efficiently than humans, putting a number of jobs at risk. Automation could lead to significant job losses in a number of industries and threaten socio-economic stability, necessitating the adaptation and retraining of workers.
  • Data privacy and security: AI systems often use and analyze large amounts of data, including users’ personal data, which creates privacy threats and risks of information leaks.
  • Inequality of access: The development and use of AI requires significant financial and technological resources, which creates a gap between developed and developing countries, as well as between different segments of the population within countries. This can lead to social inequality.

4. Security and control issues

  • Creating Autonomous AI: If AI becomes capable of autonomous decisions, there is a risk of losing control over its actions. Fully autonomous AI may make decisions that are not in the best interests of humans, raising concerns about a “machine uprising” or unpredictable behavior.
  • Military risk: AI can be used to create autonomous combat systems and weapons, which could lead to unpredictable consequences if such systems get out of control. This also increases the risk of an arms race and global instability.
  • Threats from superintelligence: Theoretically, superintelligence could surpass human intelligence and become uncontrollable. This raises concerns among some researchers and philosophers, as such AI could be motivated by goals that would be harmful to humanity.

5. Philosophical and existential problems

  • The Problem of Self-Awareness: If AI becomes self-aware, it will raise profound questions about the nature of consciousness and the rights and responsibilities of such systems. For example, will AI be able to have rights, be aware of its own existence, and experience feelings?
  • Defining the Boundaries Between Human and Machine: As AI advances, the boundaries between machine and human intelligence are blurring, raising questions about human uniqueness and what makes us who we are. Will AI ever replace humans in creativity, empathy, or spirituality?
  • Ethical Issues of Cyborgization: If AI is integrated into a human body or brain, a new form of existence could emerge where the line between human and machine is almost indistinguishable. This raises philosophical and social questions about what it means to be human and what role technology plays in the development of our being.

The creation of artificial intelligence is associated with many problems that go beyond technology and touch upon fundamental questions of morality, law, society and philosophy. Developing safe, fair and transparent AI requires a comprehensive approach that combines the efforts of engineers, lawyers, sociologists, ethicists and philosophers.

I have a simplified picture of the problem of creating AI as the following chain:

1. creativity is the more general property of which a system's self-learning is a special case
2. creativity is a manifestation of the activity of a self-governing system under conditions of uncertainty
3. activity under conditions of uncertainty rests on the basic goals of the system and the hierarchy of goals

If the system is able to build such a pyramid of values - point 3 as the base, point 2 as the middle, and point 1, creativity and self-learning, as the top - then at the output such a system will have the ability to learn and to create any algorithms; that is, we will get creativity, and what you call "real AI".

Because all the problems of AI, namely:

  • a) decision making,
  • b) knowledge representation,
  • c) planning,
  • d) training,
  • e) the synthesis of all these types of activity


- all of them can rest only on a foundation of basic values and a hierarchy of values; without such a foundation they cannot be glued together, because there is no answer to why the AI should do this and not that, in what sequence to do it, what result to strive for, why it should create an algorithm, why it should improve it, and so on.

If there are no goals, then there is no vector of effort -
only a large catalog in which knowledge of the solar system is equal to knowledge of how long to boil an egg.



Problems in building AI



In the creation of AI it is still difficult to overcome the problem of "knowledge representation" - an adequate description of the environment external to the AI, the problem of "syntax versus semantics":
in simple terms, how can an AI associate a symbol with its meaning?
For example, how to connect the encyclopedia word "stone" with the real image of a stone (a set of properties and values of a stone as a description of part of the environment: a stone is a threat of damage, a weapon of defense and attack, a mineral, a surface, etc.), with the different meanings of that image depending on the system's current tasks (AI activity), and with the areas of abstract knowledge where the concept of stone occurs (the cataloguing of knowledge).
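The "symbol to meaning" problem above can be caricatured as a lookup that depends on the system's current task. The task names and meanings below are assumptions made up for illustration; a real grounding system would of course be far more than a table, which is precisely why the problem is hard:

```python
# Crude sketch of task-dependent symbol grounding: the same symbol "stone"
# resolves to different meanings depending on what the system is doing.
# All keys and meanings here are invented for the example.

GROUNDING = {
    ("stone", "defense"):    "weapon of defense and attack",
    ("stone", "geology"):    "mineral",
    ("stone", "navigation"): "surface / obstacle, threat of damage",
}

def interpret(symbol, task):
    """Resolve a symbol to a task-dependent meaning, if one is catalogued."""
    return GROUNDING.get((symbol, task), "unknown meaning")
```

The table makes the difficulty visible: every (symbol, task) pair must be enumerated by hand, whereas genuine grounding would require the system to derive such meanings from its own goals and experience.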

There is also the problem of whether the basic design of AI should replicate the basic design of human intelligence.
So far the working answer sounds like this: AI should replicate human intelligence, because

  • a) we cannot even do that yet, and
  • b) there are no other examples of intelligence.


Artificial Intelligence. Basics and history. Goals.
