Artificial Intelligence

An introduction to Artificial Intelligence (AI) and AI Literacy.

Values of AI Literacy

Knowledge and Skills

  • Knowledge of the underlying models and data sets used to create and train AI
  • Knowledge of AI tools, their abilities and limitations
  • Knowledge of the ethical concerns of AI use, including privacy, piracy, bias, misinformation, misuse, environmental impact, and academic integrity
  • Development and practice of skills for effective AI use. 

Critical Evaluation

  • Examines the functionality and features of AI tools to select the right tool for the right task
  • Scrutinizes the quality of responses and outputs to determine if they are relevant to the prompt and useful for the task
  • Verifies the accuracy of AI generated statements and citations by cross-referencing with credible sources.

Appropriate Use

  • Use is intentional; individuals are accountable for their use of AI
  • Use is ethical; decisions made in the use of AI mitigate unintended and negative consequences that may cause harm
  • Use is transparent; the use of AI is attributed and AI generated contributions to communication, creative work or research are identified
  • Use is effective; the user proficiently demonstrates AI skills such as writing prompts, refining strategies, and recognizing when issues like bias or hallucination occur.

AI Literacy in Practice

Understanding the way AI models are developed to synthesize or analyze information provides a foundation for the critical evaluation and ethical use of AI. Knowledge of AI development and training may also help users recognize AI outputs such as deep fakes and other forms of AI generated content. The AI resource page has more guides, tutorials, and information for learning about AI.

Critical evaluation involves examining the functionality of AI tools to determine whether a tool is appropriate for the task, as well as scrutinizing AI outputs for relevance, usefulness, and accuracy. Evaluation may extend to investigating the development of a tool, including the data used for training the AI model and the developer's policies for data that the user may supply through uploaded images, text prompts, and account usage. Here are some steps that can help when critically evaluating AI content and use:

  • Be intentional about using AI; set expectations to determine if AI is the best tool. Using AI may save time, but it should not replace critical thinking in research, work, or the creative process.
  • Verify the text of AI responses and any cited references using scholarly or trusted sources of information such as books and journals.
  • Test results using multiple tools, sources, and strategies to confirm that AI statements are reliable and results can be reproduced.
  • Make note of how AI was used in the research or creative process, and be clear about its role in both the process and the finished work.
  • Reflect on AI use to determine whether use of the tool aided learning and was appropriate for the intended use.

Ethical AI use asks the user to consider the potential unintended and intended but harmful impacts of using AI. Effective use is grounded in knowledge and involves critical evaluation of AI as well as practiced skill in prompt and query writing. For more resources on effective use, visit the AI resources page.

At KCC, AI should only be used in learning activities as directed by the course instructor. Students should consult the syllabus to determine if and how AI may be used for assignments within the course. 

Using AI Ethically

  • Academic Integrity - AI tools that generate text and edit writing make plagiarism more challenging to avoid. Always cite AI generated text as a source when it is quoted or paraphrased and disclose how AI may have contributed to the writing process. Avoid using AI generated statements and sources without evaluating the information and verifying the source. Check out the Citation Basics guide for more information on citing AI. 
  • Accountability and Autonomy - AI has the potential to disrupt or replace human expertise, and it is important that humans remain in control of how AI is used. Teachers and students share a responsibility to maintain the quality of the learning experience through honesty about their AI use and shared values that center human interaction in thinking, problem solving, creating, and communicating.
  • Bias - Embedded biases are often present in the development and training of large language models. Bias may be reflected or amplified in AI outputs, and critical evaluation is necessary to verify information. Be aware that, unlike search results, AI responses may be skewed or completely fabricated.
  • Misinformation and Misuse - AI can be used to create or spread misinformation, and the misuse of AI may be harmful to people. AI can produce misinformation through bias or hallucination, or it can be used to create content like deep fakes and clones with intent to distort the truth. Verify that content is real before using or sharing it, and acknowledge when AI was the source of information.
  • Privacy - AI presents many challenges to protecting the privacy of individuals and their identifiable data. Do not provide personal information when using AI, and investigate the privacy policies of AI tools before use to determine how data may be used, stored, or sold.
  • Piracy - Like bias, piracy concerns are rooted in the development and training of large language models. Copyrighted works may have been used to train the model without the knowledge of the original creator or the AI user. Before sharing AI generated content, be aware that it may contain material or ideas that are protected by copyright.
  • Sustainability - The development and training of AI technologies use large quantities of energy resources for computing and cooling. As AI use becomes more widespread, the impact on these finite resources will grow. Before using AI, question whether its use contributes to the process beyond saving time, and avoid unnecessary use.