AI Skills

AI literacy is now foundational. Organisations that invest in broad AI capability at every level will outperform those that concentrate technical skills in specialist teams. However, these efforts must be balanced with human skills such as critical thinking in order to limit bias and manage ethical implications.

Our AI and digital skills training is designed for non-technical professionals who need to work effectively with new systems, understand their capabilities and limitations, and make informed decisions about technology deployment. We focus on practical application rather than theoretical computer science.
Public sector professionals need to understand what artificial intelligence is, how it works, and why it matters at work, before they can use it well. This introductory course builds that foundation in a clear, practical way.


Topics include what AI is and how generative AI differs from other types, common tools and practical examples, prompt writing and evaluating outputs, key risks and ethical issues, and first steps for using AI responsibly at work.

Knowing how to use a generative AI tool is one thing. Knowing how to direct it precisely, reliably, and efficiently is a skill most professionals have never been taught. Research published in Frontiers in Education identifies prompt engineering as a core 21st-century competency on a par with writing or data analysis. This course builds that skill through hands-on practice with real workplace tasks.


Topics include the structure of an effective prompt, how framing and context change outputs, chaining prompts for complex tasks, building a personal prompt library, and knowing when to override AI suggestions entirely.

Every AI system reflects the data it was trained on, and that data is rarely neutral. This course examines what ethical AI use looks like in practice, why bias is not always visible, and what accountability means when a machine makes or influences a decision. The Information Commissioner’s Office has established that organisations must actively monitor for discriminatory outcomes, not simply assume algorithmic processes are objective.


Topics include where bias originates, how to identify it, transparency obligations under UK GDPR, accountability frameworks, and building an ethical checklist for everyday AI use.

Dashboards, predictive outputs, and algorithmic recommendations now arrive in most professionals’ inboxes every week, yet the skills to interrogate them are rarely taught. This course builds the data literacy that non-technical professionals need to engage confidently with data-driven environments, challenge outputs that do not add up, and make better decisions as a result.


Topics include the difference between correlation and causation, how to read a dashboard critically, what a dataset cannot tell you, when data should and should not drive a decision, and how to present data-based recommendations with confidence.

Public sector professionals need to understand what artificial intelligence is, how it works, and why it matters in local government, before they can use it well. This introductory course builds that foundation in a clear, practical way.


Topics include what AI is, common tools and examples, how AI may support council services, key risks and ethical issues, and first steps for using AI at work.

Time is the one thing public sector teams never have enough of. This course shows how AI tools can reduce routine admin, free up capacity, and improve day-to-day productivity in ways staff can apply straight away.


Topics include everyday AI tools, drafting and summarising support, automating routine tasks, secure use of AI, and practical examples for council teams.

Councils generate enormous amounts of data, but turning that data into useful decisions is a skill most teams have never been taught. This course introduces how AI and analytics can change that. The Open Data Institute’s research on UK local authorities found that data quality and leadership capability are the two most significant barriers to effective AI adoption.


Topics include how data supports public sector decisions, analytics and predictive insight, using information effectively, local authority examples, and responsible use of data.

Residents increasingly encounter AI at every point of contact with their council, from automated responses to decisions shaped by algorithmic tools they never chose to interact with. The Local Government Association’s 2025 survey found that organisational reputation and resident trust ranked among the top three AI risks identified by councils.


Topics include how AI is already used in resident-facing services, rights under UK GDPR and the Equality Act 2010, explaining AI decisions to the public, spotting when a system may be producing unfair outcomes, and escalating concerns appropriately.

When AI shapes public decisions, trust is everything. This course helps public sector teams understand responsible AI and the governance needed to protect it. The UK Government’s AI Playbook sets out ten principles every civil servant must follow when using AI, covering safety, fairness, accountability, and transparency.


Topics include responsible AI in practice, fairness and bias, privacy and accountability, governance principles, and clear communication around AI decisions.

Algorithms do not arrive neutral. Research published in the Journal of Public Administration Research and Theory documents how AI systems can reinforce and amplify existing inequalities at scale, often affecting the people who are already most disadvantaged and least able to challenge the outcome. This course goes beyond awareness to give delegates practical tools for identifying, questioning, and escalating algorithmic bias in their own area of work.


Topics include how bias enters AI systems, real UK public sector cases where algorithmic decisions caused harm, the legal framework under the Equality Act 2010, what meaningful human oversight looks like, and practical steps delegates can take the following working day.

In adult social care and housing, AI tools can shape who receives support, who is flagged as high risk, and who waits longest. Getting that wrong has consequences that cannot easily be undone. The UK Government’s AI Playbook identifies bias, privacy, and the absence of human oversight as the most significant risks in public sector AI deployment, and nowhere are those risks more consequential than in care and housing.


Topics include how automated tools are already used in social care and housing, what responsible use looks like in practice, how to document decisions made with algorithmic support, safeguarding obligations when AI is involved, and designing human oversight into AI-assisted workflows.

Most public sector managers are not being asked to understand AI deeply. They are being asked to lead teams through it, make decisions about it, and maintain public trust during a period of rapid change. A 2025 parliamentary report found that 70% of government bodies identified difficulties in recruiting and retaining staff with AI skills as a barrier to adoption, placing the burden of digital leadership firmly on existing managers.


Topics include leading teams through technology change, making sound decisions about AI without technical expertise, maintaining staff confidence and public trust, holding suppliers to account, and identifying when an AI system needs to be challenged or stopped.

Legal obligations around AI are already in force, and more are on the way. The Public Authority Algorithmic and Automated Decision-Making Systems Bill, introduced in the House of Lords in 2024, signals the direction of travel: greater transparency, stronger accountability, and enforceable standards for how public bodies use automated tools.


Topics include UK GDPR and automated decision-making rights, the Equality Act 2010 and algorithmic discrimination, the Algorithmic Transparency Recording Standard, procurement obligations under the Procurement Act 2023, and what to do when an AI system may be operating outside legal boundaries.

Contact Info

Mon–Fri: 9am–5pm
contactus@landellfraser.com

Office Address

Landell Fraser Limited, Registered Company 6085832. Registered Office: Savoy House, Old Oak Common Lane, Savoy Circus, London, W3 7DA