Learning.
Answer:
Artificial Intelligence (AI) is the science of building machines that can perform tasks
requiring human-like intelligence. Machine Learning (ML) is a branch of AI focused on
algorithms that learn patterns from data and improve with experience rather than through
explicit programming. Deep Learning (DL) is
a further subset of ML that uses multiple layers of neural networks to extract high-level
features, enabling breakthroughs in areas such as image, audio, and language processing.
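The phrase "learn patterns from data" can be made concrete with a minimal sketch: a one-parameter model that improves automatically by gradient descent. This is a toy illustration invented for this answer, not any particular library's API:

```python
# Minimal sketch of "learning from data": fit y = w * x to a few points
# by gradient descent. Illustrative only; real ML uses richer models.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0                      # model parameter, learned from the data
lr = 0.01                    # learning rate
for _ in range(500):         # repeated passes over the data
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad              # adjust w to reduce the error

print(round(w, 1))           # w settles near 2.0, the pattern in the data
```

No rules about the relationship between x and y were written by hand; the value of w was recovered from the examples alone, which is the core idea behind ML.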
Q2. What difficulties do researchers face when designing AI systems?
Answer:
1. Data Limitations: Many domains lack large, clean, and unbiased datasets.
2. Transparency: Models, especially neural networks, are often complex and difficult to
interpret.
3. Resource Constraints: Training large AI models demands significant computation power
and energy.
4. Ethical Challenges: Issues like fairness, privacy, and accountability must be addressed.
5. Adaptability: Models can struggle when facing unseen environments or data shifts.
Q3. Explain the concept and real-world use cases of Reinforcement Learning.
Answer:
Reinforcement Learning (RL) is a learning paradigm in which an agent learns by trial and
error: it acts in an environment and receives feedback in the form of rewards or
penalties. Over time the agent learns a policy that maximizes its cumulative reward. RL
is applied in robotic control, self-driving cars, personalized recommendations, and
complex strategy games such as chess and Go.
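The trial-and-error loop can be sketched with tabular Q-learning on a hypothetical five-state corridor (a toy environment made up for illustration; the state space, rewards, and hyperparameters here are all assumptions):

```python
import random

# Toy environment: states 0..4 on a line, goal at state 4, actions are
# "step left" (-1) and "step right" (+1). The agent starts knowing nothing.
random.seed(0)

N_STATES, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for _ in range(200):                 # episodes of trial and error
    s = 0
    while s != GOAL:
        # Explore randomly sometimes; otherwise exploit the best-known action.
        a = (random.choice((-1, 1)) if random.random() < eps
             else max((-1, 1), key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0            # reward only at the goal
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # Q-update
        s = s2

# After training, the greedy policy steps right from every state.
policy = [max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

No one tells the agent that "right" is correct; the reward signal alone shapes the learned Q-values, which is exactly the optimization-by-feedback idea described above.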
Q4. What is Transfer Learning and why is it significant?
Answer:
Transfer Learning involves reusing knowledge from a model trained on one problem to
solve a different but related problem. This method significantly reduces data requirements
and training time. For instance, an image recognition model pre-trained on millions of
everyday images can be adapted for medical X-ray analysis with limited labeled data.
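The idea can be sketched with a deliberately tiny one-parameter model and made-up source/target tasks: a weight pretrained on one task is reused as the starting point for a related one, so far fewer updates are needed than when training from scratch:

```python
# Hypothetical sketch of transfer learning: pretrain on a large "source"
# task, then fine-tune the same weight on a small related "target" task.

def train(data, w, steps, lr=0.01):
    """Gradient descent on squared error for the model y = w * x."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

source = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # source task: y = 2x
target = [(1.0, 2.2), (2.0, 4.4)]                  # related target: y = 2.2x

w_pre = train(source, w=0.0, steps=300)    # pretraining on the source task
w_ft  = train(target, w=w_pre, steps=20)   # fine-tuning: starts near the answer
w_raw = train(target, w=0.0, steps=20)     # training from scratch, same budget

# With the same small number of updates, the fine-tuned weight is much
# closer to the target solution (2.2) than the from-scratch weight.
print(abs(w_ft - 2.2) < abs(w_raw - 2.2))
```

The pretrained weight encodes knowledge from the source task that transfers to the target task, which is why limited labeled data (here, two points and twenty passes) suffices after pretraining.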
Q5. How does Explainable AI (XAI) improve trust in AI models?
Answer:
Explainable AI provides clarity on how AI systems arrive at decisions. It uses methods such
as:
- Feature relevance ranking to show which inputs influence predictions
- Visualization tools like heatmaps in neural networks
- Model-agnostic techniques such as LIME and SHAP that break down predictions for
human interpretation
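The model-agnostic idea can be sketched with a permutation-style relevance check, a simplified cousin of what LIME and SHAP formalize. The model and data below are invented for illustration: shuffle one input column at a time and observe how much the black-box model's error grows.

```python
import random

# Illustrative feature-relevance check: break the link between one input
# and the labels by shuffling it, then measure the increase in error.
random.seed(1)

def black_box(x1, x2):
    return 3.0 * x1 + 0.1 * x2        # stand-in model: x1 matters far more

X = [(random.random(), random.random()) for _ in range(200)]
y = [black_box(x1, x2) for x1, x2 in X]

def error(preds):
    return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

base = error([black_box(x1, x2) for x1, x2 in X])   # 0: model matches labels

def importance(col):
    shuffled = [row[col] for row in X]
    random.shuffle(shuffled)          # destroy this feature's information
    preds = [black_box(s if col == 0 else x1, s if col == 1 else x2)
             for s, (x1, x2) in zip(shuffled, X)]
    return error(preds) - base        # error increase = feature relevance

print(importance(0) > importance(1))  # x1 is the more relevant input
```

Because the check only queries the model's inputs and outputs, it works for any model, which is what "model-agnostic" means in the LIME/SHAP context above.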