Artificial Intelligence (AI) is primarily a tool for specific, data-driven tasks, far from the human-like intelligence often depicted in popular culture. AI systems excel at pattern recognition and decision-making based on statistical analysis, focusing on narrow tasks. Today, AI is already embedded in much of everyday life, influencing everything from online searches to healthcare, voice assistants, and predictive analytics.
Looking ahead, AI’s potential includes advancements toward Artificial General Intelligence (AGI), though its arrival is speculative. AI will continue to transform industries, automating tasks in the workforce and driving innovations in fields like healthcare. However, AI also raises ethical concerns around bias, privacy, and accountability, among others, as its data-driven decisions can perpetuate existing societal inequalities.
AI’s energy consumption and environmental impact are also important considerations. Christian developers should prioritize stewardship and transparency, ensuring that AI systems serve the greater good, supporting God’s purposes for humanity rather than distorting them.
For Christian ministries, AI offers new opportunities, such as evangelism chatbots, Bible engagement tools, and decision-making systems for ministry strategies. Yet, ministries must approach AI with caution, ensuring it supplements human efforts rather than replacing them. Ethical use of AI should align with biblical principles, respecting human dignity and fostering justice, love, and human flourishing.
“Artificial Intelligence” often evokes images of personas like HAL, the Terminator, or Ex Machina. And while AI has aspired to create artificial persons, AI systems are “mostly used for infrastructure,” in ways both less flashy and more ubiquitous than what Hollywood has imagined. Moreover, far from the general intelligence many are dreaming of, most AI systems are narrowly focused on very specific and tractable tasks, like identifying cats or human emotions.
There is no agreed-upon definition of what AI is. Moreover, there are multiple ways one might try to define AI: (a) what it’s intended to do, (b) what it does or how it works, or (c) what it is. We might also do well to define what it is not.
The term “Artificial Intelligence” was coined in 1956 at a workshop at Dartmouth College in the US. Defined most broadly, Artificial Intelligence (AI) is a pattern-matching system that decides what labels to apply (or, with LLMs, what the next word should be). So, if you have a collection of striped, polka-dot, and plaid shirts, you can ask the AI to sort them into each group. At its core, if you give an AI system a piece of data, it will respond with a label for that data. If you give it a picture of a plaid shirt, a well-trained AI system will return the label “plaid.”
Humans have designed fabric patterns of seemingly infinite variety, and human-level cognition distinguishes among them in various ways—not always systematically. An AI system, by contrast, approaches all of these distinctions through methods built around pattern-matching.
Pattern Recognition. So how does the AI determine what patterns to follow? This pattern-finding is the unique quality that AI systems provide. Rather than having humans prescribe a pattern to the computer, they ask the computer to find the patterns on its own. Each such pattern is represented by a variable—called a “parameter”—that is left undefined by humans and becomes defined through the AI’s own analysis of the data set. Currently, the most expansive AI systems have trillions of parameters, that is, trillions of defined patterns that these systems can look for within a collection of data. So whereas humans might discern a thousand (or a million) fabric patterns, current AI systems can track and identify trillions more.
Based on the patterns determined by the AI system, it can then take a new piece of data and suggest (or decide) what labels fit best. So, once a recommender system knows that “people who liked this book also liked this other book,” we can give the AI system a new book and it will use its collected patterns to figure out who should see this new book. Or, given a pile of images or sound clips, the AI system will use the patterns it has discovered to label each image so it can be searched for, or to transform that sound into text for a given language.
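To make the label-and-parameter idea concrete, here is a minimal sketch (not from the report itself): a small classifier learns its own parameters from a handful of made-up “shirt” feature values and then labels a new shirt. The features, numbers, and labels are all hypothetical.

```python
# Minimal sketch: a classifier sets its own parameters from labeled examples.
# The "shirt" features (stripe_score, dot_score) and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is one shirt: [stripe_score, dot_score]; labels name the fabric pattern.
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8], [0.5, 0.5], [0.4, 0.6]]
y = ["stripes", "stripes", "polka dots", "polka dots", "plaid", "plaid"]

model = LogisticRegression(max_iter=1000)
model.fit(X, y)                      # parameters are learned from the data, not prescribed by humans

new_shirt = [[0.85, 0.15]]
print(model.predict(new_shirt))      # returns a label, e.g. ['stripes']
```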
Besides the challenges of defining AI, there are multiple ways to categorize AI systems.
There are multiple types of AI systems that are collectively dubbed “artificial intelligence.” Fortunately, some of these have clearer definitions.
Among the many applications of AI, you’ll hear numerous names thrown out: Machine Learning (ML), Deep Learning, Natural Language Processing (NLP), Computer Vision, Expert Systems, Robotics and Autonomous Systems, Generative AI, Affective Computing, Machine Translation, Facial Recognition, Recommender Systems, and more. Many of these could be reports unto themselves.
Recommenders. Similarly, when you get a book recommendation on Amazon, it’s based on this kind of pattern matching: “people who liked this book also liked…”. This decision-making and pairing is then used in increasingly complex ways. While developers intend for AI systems to imitate human thinking and choices, these systems often use very different methods than a human mind does. The degree to which humans can understand these methods is a common AI problem known as “interpretability.”
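A minimal sketch of the “people who liked this book also liked…” idea, using invented purchase histories; the simple co-occurrence counting shown here is only an illustration, not the method any particular retailer actually uses.

```python
# Minimal sketch of item co-occurrence: "people who liked this book also liked..."
# The purchase histories and titles are invented for illustration.
from collections import Counter

histories = [
    {"Pilgrim's Progress", "Mere Christianity", "Confessions"},
    {"Mere Christianity", "Confessions"},
    {"Pilgrim's Progress", "Mere Christianity"},
    {"Confessions", "Institutes"},
]

def also_liked(book, histories):
    counts = Counter()
    for basket in histories:
        if book in basket:
            counts.update(basket - {book})   # count titles bought alongside `book`
    return counts.most_common()

print(also_liked("Mere Christianity", histories))
# e.g. [('Confessions', 2), ("Pilgrim's Progress", 2)]
```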
In addition to the field of computer science, AI systems and theories draw on many advanced fields, including psychology, neuroscience, linguistics, philosophy, economics, probability, logic, and more. As you can see, AI gathers many fields of research together, but each field’s knowledge must exist as data that an AI system can read; otherwise that knowledge is omitted, limiting the reach of AI’s “knowledge.” And while philosophy is represented, religious traditions are not well-represented. But many Christian computer scientists are making headway there too!
AI in Use. Since the release of ChatGPT in late 2022, AI has been dominating technology headlines. While popular culture often presents AI in futuristic ways, in reality, more mundane systems are already part of many people’s everyday lives. Even before ChatGPT, AI systems were being used commercially (and for fun) in numerous contexts, generally at corporate and industry levels, and as built-in services to consumers.
Besides ChatGPT, people are encountering AI systems in email spam filtering, online advertising, customer service chatbots, and song, video, or product recommender systems. Additionally, unbeknownst to most people, their experiences have likely been affected by AI in flight delay predictions, energy management and distribution, supply chains, sales forecasting, investments, legal forensics, and online article posting. AI systems are also increasingly being used in criminal justice, traffic management, medical insurance and diagnoses, and reviewing job applications.
Most, if not all, of these examples represent “narrow” or “weak” AI systems, since they focus on a few tasks or very specific domains of knowledge.
The future of AI is hard to predict. One need only look at past predictions to see that. In the mid-1950s, the founders of AI had hoped to make significant strides toward solving the problem of intelligence in a single summer.
Artificial General Intelligence (AGI) or “Strong AI” refers to something closer to human intelligence. It would be the ability for a machine to handle any cognitive or learning task at human or superhuman levels, without focusing on a narrow learning task. We do not yet have this level of AI, and experts disagree on the prospects of if and when it might arrive. (A 2012-13 survey of experts gave it a 50% chance of arriving by 2050, and 90% by 2075.)
Meanwhile, beyond General AI, the concept of “Artificial Super Intelligence” is the stuff of movies at this point. It would require not only that General AI be achieved, but also that AI systems surpass human-level intelligence in every area, including simple things like tying a shoe, which AI has not yet mastered.
A Utility. Some regard AI to be a general platform technology, much like electricity, describing it as “a basic fact of our lives, an invention like money or democracy,” such that it will be used in and transform nearly every societal context. While AI is often not as tangible as cash or as visible as electricity, this prediction about AI is probably not far off. For that reason, it may create societal upheaval for years or decades to come before settling into the fabric of society. It will take time for AI’s role and ripple effects to reach a new equilibrium within the systems that structure society.
Language Translation. In our previous Trend report written in 2020, we predicted that “AI systems will likely become widely used in language translation within the next 3-5 years.” In January 2024, Samsung introduced AI-powered translation features for its newest smartphones, including a Live Translation feature that allows real-time translation during phone calls, both audibly and on-screen, for up to 13 languages.
Workforce. AI is more likely to take people’s tasks than their jobs. AI is likely to automate aspects of many kinds of jobs as it is integrated into the workforce. The benefits and consequences will not be evenly distributed, and will press Christians to continue to work for justice on behalf of marginalized people.
Healthcare, Physical and Spiritual. The 2024 Nobel Prize in Chemistry was awarded in part for AlphaFold, an AI system that predicts the structures of proteins. With AlphaFold, researchers can accelerate the discovery of new medicines.
AI systems have already been used extensively in healthcare contexts, including alerting people at risk of suicide, as well as their friends. Similar systems could eventually extend to spiritual health tracking as well, according to Nathan Matias. Groups like Tether and Cherith are seeking to develop discipleship apps with similar goals.
Governance & Policy. Governments are slowly beginning to propose legislation, but are even slower to fully bake it into law (Stanford 2022 AI Index Report, ch 5, p 7). Bad actors will continue to proliferate until legislation steps in. Nonetheless, regulatory frameworks are developing. Among them …
For more from Christian organizations, see our “Resources” section.
Environment. AI systems, like generative models, are highly energy-intensive, and they have significant environmental impacts that are often invisible to users. For example, training GPT-3 used around 500 metric tons of CO2, equivalent to 600 flights between London and New York, while GPT-4 consumed a staggering 13,000 metric tons. Everyday AI interactions, such as a ChatGPT query, can consume 3 to 30 times more electricity than a conventional Google search, and many queries lead to follow-up questions that further increase energy usage.
Christian stewardship calls for conscientious management of these impacts, beyond legal minimums, to reflect our responsibility for creation care. How might AI developers, especially within Christian organizations, make energy consumption and environmental costs transparent to users? Energy transparency would empower users to make informed decisions and choose to reduce their carbon footprint. Smaller, more efficient AI models should also be prioritized, and where larger systems are necessary, their benefits must be clearly justified.
Objectivity. Are AI systems objective in any sense? AI systems have been popularly marketed as objective, as removing human subjectivity from the equation, but the reality is far more complicated. A report in the journal Science found that a healthcare company’s algorithm was unintentionally discriminating against black patients: because their habits of seeing doctors differ from those of white patients, less money was spent on their care on average, which made them appear healthier. Likewise, AI systems will always have access to some data points and not others, depending on what data the researchers gathered and the AI was trained on. The healthcare algorithm above used “cost” as a proxy for “health,” but failed to account for other reasons why people might avoid hospitals. While such AI systems may not be malicious, and the creators may have good intentions, the systems will still reflect human biases, both those of the researchers and those of the people about whom the data is collected. This is why “AI bias” is a major issue that both AI ethicists and AI makers are talking about.
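To make the proxy problem concrete, here is a small, purely synthetic simulation (not the actual algorithm from the Science study): two groups have the same underlying health need, but one group historically spends less on care, so any score built on cost will rank it as “healthier.”

```python
# Synthetic illustration of proxy bias: "cost" stands in for "health need".
# All numbers are invented; this is not the algorithm from the cited study.
import random

random.seed(0)
patients = []
for group in ("A", "B"):
    for _ in range(1000):
        need = random.gauss(50, 10)                # true health need, same distribution for both groups
        spend_rate = 1.0 if group == "A" else 0.6  # group B spends less for the same need
        cost = need * spend_rate + random.gauss(0, 5)
        patients.append({"group": group, "need": need, "cost": cost})

# A model that ranks "risk" by past cost will under-rank group B,
# even though both groups have identical average need.
for g in ("A", "B"):
    rows = [p for p in patients if p["group"] == g]
    avg_need = sum(p["need"] for p in rows) / len(rows)
    avg_cost = sum(p["cost"] for p in rows) / len(rows)
    print(g, "avg need:", round(avg_need, 1), "avg cost:", round(avg_cost, 1))
# Group B shows similar need but much lower cost, so a cost-based score calls it "healthier".
```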
Substituting Instead of Supplementing. The key to thinking about AI in ministry is to ask how it can supplement human activities, not substitute for them. As Gartner writes, “The most common mistake with AI is to focus on automation rather than augmentation of human decision making and interactions.” In fact, some research has found that doctors plus AI perform better together than either one alone. And not just doctors either. Still, even as AI supplements human tasks, it will reshape the habits and activities of humans, as well as the relationships between individuals.
For example, in using AI systems for recruiting or hiring, orgs that rely too much on AI analysis may overlook details that the AI has not accounted for, like humility or creativity or flexibility. If there’s no data about it, the AI system cannot account for it. Builders should pay close attention to these shifts and biases, keeping in mind the values they and their organization hold.
Outward, Trackable Behavior Only. The nature of AI systems, as they stand, is to focus on behaviors and externalities. AI systems are focused exclusively on what is observable and quantifiable in terms of data sets. While it might find unexpected patterns and links between various data points—and those insights can be meaningful—missionaries are likely aware of still more implicit dynamics that occur between people, many of which have yet to be articulated explicitly, let alone measured and quantified. Valuable questions to ask will be, What is still missing from the picture? What data does the AI system not have? And therefore, What is it unable to account for?
Re-biasing. One expert we talked to stated flatly, you can’t take the bias out of the AI system; you can only re-bias it in another direction. Because AI systems derive from historic data, they are liable to reproduce and amplify existing human biases latent in that data. The decisions made by AI systems are either bounded by constraints that coders and designers determine or by patterns derived from historical records of human decisions, both of which are subject to bias. In other words, AI-driven decision-making is fully reliant on human decision-making, not separate from it.
At the same time, AI systems act like a mirror, so they might also illuminate social biases that were previously invisible, ignored, or covered up. Thus, these systems even offer potential ways to highlight bias so that we can work to reduce it. While increasing awareness about AI bias is a positive step, efforts to fix it will not automatically eradicate it.
To mitigate the risk of bias, builders should draw diverse groups of people into the development of AI systems. Without diverse perspectives, builders will develop systems with blind spots that unconsciously bias their product. Additionally, builders will need historic data sets that represent the populations they seek to serve. If the data fails to reflect the population, AI systems will in turn magnify (and privilege) those who are represented and further marginalize those who are not. For example, an AI system that only learns from photos of striped shirts will never be able to identify a polka dot shirt. AI systems need participation from everywhere, not just “the West to the rest.”
DeepMind identified 3 criteria of safety that AI systems should fulfill before being deployed:
AI Systems Require Big Data. AI systems typically require large amounts of data to work (and work well). And people are the ones who generate most of that data. Some of that data may be especially personal, including face recognition data and emotion tracking data, among others.
Dignity with Data. Builders must decide how various kinds of data relate to the humans who generate it. Some data, like biometrics, are closely tied to a person’s identity; other data, like behavioral data, are somewhat less identity-based; and still other data may have little to do with the person who created it. Some data therefore deserve more secure protection than others, and builders must discern how their “love of neighbor” informs their management of data: the honor and dignity that builders give to humans should in turn be reflected in the honor and dignity they give to people’s data.
Privacy Postures. Contextual privacy is the idea that humans disclose different kinds of personal information depending on the circumstances. You will tell your family personal information that you wouldn’t share with your employer. Context matters. However, digital contexts obscure where data flows and who might see it. Nonetheless, contextual expectations should inform digital privacy policies.
Recent privacy legislation:
Privacy Strategies. Differential privacy and homomorphic encryption are two methods used to maintain privacy while using AI systems.
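As an illustration of the first of these, the sketch below adds calibrated Laplace noise to a simple count query—a standard differential-privacy mechanism. The epsilon value and data are invented, and real deployments require much more careful privacy budgeting.

```python
# Minimal differential-privacy sketch: answer "how many users?" with Laplace noise.
# Epsilon and the data are illustrative; production systems need careful privacy budgeting.
import numpy as np

def dp_count(records, epsilon=0.5):
    true_count = len(records)
    sensitivity = 1                      # adding/removing one person changes the count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

users = ["alice", "bob", "carol", "dave"]
print(round(dp_count(users), 1))         # a noisy count, e.g. 5.3, instead of the exact 4
```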
Surveillance. Because AI makes it easier to manage large data sets, larger data sets are being created for uses such as citizen and consumer surveillance by governments and corporations. These surveillance practices will almost certainly impact organizations working in restricted countries. That said, according to the AI Global Surveillance Index (pdf), 51% of advanced democracies deploy AI for surveillance, compared to 37% of closed autocratic states (pdf).
Imago Dei. The imago dei is a key category Christians can use for evaluating AI systems. There are multiple views of what constitutes the image of God in humans. One view suggests that the imago dei is what we have, such as intelligence, consciousness, morality, and so on. Another view suggests that it is what we do, functioning as God’s representatives and exercising dominion or stewardship in the world. Still another view frames the imago dei as grounded in our relational nature, reflecting the Trinity’s own nature. Within this relational frame, the imago dei is preserved as something unique to humans and not shared by AI systems.
Whatever the case, one can see that as technology has developed historically, so have the theological perspectives on imago dei. New theological frames have developed as new technologies have challenged how humanity thinks about its own identity. In this way, theology and technology have been in dialog, and technological development has continued to help theologians more clearly articulate how humanity images God. Much more theological engagement with the concept of AI is necessary, and Genesis 1-3 can be a guiding light.
Stewardship. The Bible is certainly no foreigner to classification and decision-making, which are the fundamental skills of AI systems. In Genesis, one of the first tasks God gives to Adam is to name the animals. Today, this act of identifying and naming is still a fundamental role for human beings, and the creation of AI is yet another example of humanity’s efforts to fulfill that task. At the same time, as humans delegate this work to AI systems, it will alter how humans fulfill their responsibility in this God-given role.
Judicial Process. Moses famously adjudicated the Covenant Law for the people of Israel until he no longer had the capacity to do it all (Exodus 18). Then he distributed the work among “capable, honest men who fear God and hate bribes” (v. 21). Moses institutionalized the decision-making work of judging disputes within the system of Law. Like us today, Moses had a flood of information that he had to digest, weigh, and decide on—more than he could handle. Like him, we are drowning in data, more than we can handle, and we look to AI systems to help us process it, hoping to make sense of it and determine what to do with it. Given these similarities, it’s no wonder that today’s criminal justice systems are also seeking algorithmic solutions to make sense of it all.
Civil Systems. Like Moses, the 12 disciples in Acts 6 established a system for food distribution by selecting “seven men who are well respected and are full of the Spirit and wisdom” (Acts 6:3). Part of the problem they faced was accusations of systemic discrimination (v. 2). With this emerging need, the disciples developed a programmatic solution that was eventually institutionalized in the 7 men they chose as administrators and decision-makers. Their issues, and solutions, sound surprisingly similar to our own today.
AI Bias. Today, many are raising legitimate concerns about “AI bias.” This reality will continue to be a liability in AI systems, just as it was in the food distribution program of the early church. This reality is likely a tension to manage, not a tension to resolve. Anywhere decision-making is required, bias has the potential to creep in and issues of injustice will arise. AI systems will be no different. They will primarily shift where that bias happens, how obvious or obscure it is, and how it might be managed. Pursuing justice will continue to require public awareness, debate, and negotiation over the values guiding an AI system—a negotiation that must govern the system rather than be subject to it.
The Robot Will See You Now: Artificial Intelligence and the Christian Faith. This book was published in the UK in 2021.
AI systems should enhance human relationships and responsibilities, not replace them. When developing AI systems for ministry use, builders must consider theological and ethical implications, ensuring systems uphold human dignity, align with biblical values, and support human flourishing.
AI systems must be designed to embody human values like fairness, transparency, and accountability. Developers should safeguard privacy, secure data, and establish ethical guardrails—all with the goal that AI systems might support God’s purposes for humanity. Ultimately, Christian builders should prioritize systems that foster love, generosity, and gratitude, aligning AI’s impact with Jesus' call to sacrificial love and the broader goals of Christian discipleship.
For Christians, all humans are set apart with intrinsic and equal value, moral agency, and creativity. While AI might imitate some human qualities, it cannot replicate them. This human identity and dignity is grounded in the image of God (Genesis 1:26-27). While AI can assist in various tasks, it lacks the capacity for true relationships (Gen 2:18) or decision-making (Gen 2:19), although it may wrongly stand in for both.
As Christians, we believe humans must prioritize human connection over AI relationships, ensuring AI serves to enhance human flourishing, uphold dignity, and align with God’s purposes for humanity. Stewardship of AI includes recognizing biases, validating accuracy, and deploying technology ethically. AI should not take over or subvert humanity’s God-given responsibilities and identity.
Artificial intelligence is a product of human creativity and can be celebrated as such. It is also subject to humanity’s God-given authority to guide and steward it. While AI is not subject to any one person’s or one group’s control, neither is it an independent agent separate from human oversight.
As a human creation, AI is designed for a purpose, and therefore is not neutral in any sense. The purposes to which it is put and the methods by which it seeks to achieve those purposes are both inherently biased. These biases will make it good for some purposes. In other cases, its biases risk deforming the users and the outputs, regardless of the purpose.
Honest weights and measures delight the Lord (e.g., Lev 19:36; Prov 11:1; Ezek 45:10). By contrast, “a [dishonest] scale can cheat an entire city into poverty” (Dyer, 125) and dishonest or biased AI systems can cheat an entire population into various forms of poverty—biblical, financial, judicial, and more. But honest and well-balanced AI systems might support human flourishing.
As Christians, we must create routines to regularly review and re-balance AI systems and ensure they accurately reflect and support the future the Bible envisions for human flourishing—namely love, freedom, and care. This review must account for the experiences of the marginalized, the weak, and God’s good creation.
Christian builders are negligent if they do not consider the risks posed by AI systems. Deuteronomy 22:8 required, “When you build a new house, you must build a railing around the edge of its flat roof. That way you will not be considered guilty of murder if someone falls from the roof.” AI systems are more complex than a rooftop, so their risks will be as well. What does “falling off” look like? Builders must envision these risks and install reasonable guardrails to protect users.
Christian builders, to prevent negligent harm from AI systems they deploy, must install reasonable and adequate guardrails for users.
More broadly, the honest scales and guardrails mentioned above are bare minimums. These Old Testament guidelines are encompassed by New Testament teaching. Jesus’ command goes further to include love for both neighbor and enemy (Lev 19:18; Matt 5:43-44). Not only that—Jesus’ “new command” is to sacrificially “love one another as I have loved you” (John 13:34; 15:12). Jesus’ New Testament call to sacrificial love supersedes the Old Testament command to “love your neighbor as yourself” (Lev 19:18).
Jesus’ own example of cruciform love exemplifies an approach to blessing that Christian builders should aim to embody in their AI systems.
AI systems should be designed in ways that support human flourishing and alleviate human suffering. It should “elevate the dignity of human beings and their capacity to flourish as image bearers in the world.” Christian builders have the opportunity to imagine how AI systems might truly help people flourish—informing human agency, upholding human responsibility, developing cognitively and creatively, advancing human embodiment, promoting emotional and spiritual well-being, supporting relationships and celebration, restoring trust, and benefiting the global majority.
In development, this means looking beyond the near-term outputs of an AI system to also anticipate medium-term outcomes and imagine longer-term effects. In addition to time horizons, this also means developing with an eye to AI's potential to support human flourishing across social, economic, and environmental landscapes. Building AI systems without such broad visions falls short of God's goals for humanity.
To explore this question, we developed a matrix of possible ideas. We developed a taxonomy of 7 AI techniques and cross-tabbed it with 14 areas of Christian ministry. Below is an outline of our Gen AI-supplemented process.
To develop the AI taxonomy, two FaithTech experts, with ChatGPT and Claude support, discussed various lists and pared them down to the following.
To develop the areas of Christian ministry, we reviewed the list of Issues from Linking Global Voices. We had ChatGPT group this list into 15 broader categories and then removed one, leaving 14. They are …
We then had ChatGPT define the AI techniques, to provide a basic starting point for each technique. Then, using that information, we had ChatGPT “brainstorm ideas for how these methods/techniques could apply” to each of the ministry categories. We reviewed the results and selected the best concepts. This process was iterated a number of times.
The matrix is the result of this process.
An overriding ethic of data stewardship should guide builders of AI systems (Genesis 1:28). Developers and users are called to act as stewards, ensuring systems have robust security measures (Deut 22:8) and are beneficial for everyone. This means stewarding the data supplied or generated by users, and caring for their data as a way of caring for them. Furthermore, loving one’s neighbor (Matthew 22:39) means protecting others by securing personal and communal data from theft, breach, or misuse, thereby upholding justice and the common good.
Personally Identifiable Information (PII) should never be included in public-facing training data, and should almost never be ingested into private, institutional AI systems. With internal systems, builders and executives must provide strong justification for any inclusion of PII.
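One simple, hedged illustration of keeping PII out of training data is a redaction pass over text before ingestion. The regex patterns below catch only obvious emails and phone numbers and are no substitute for a full PII review.

```python
# Minimal sketch: redact obvious PII (emails, phone numbers) before text enters training data.
# These patterns are illustrative only and will miss many forms of PII.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact Maria at maria@example.org or +1 555-123-4567."))
# -> "Contact Maria at [EMAIL] or [PHONE]."
```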
Pairing data sets may risk invading privacy. AI systems could conceivably collect various data sets that separately do not represent a breach in privacy but together would qualify as surveillance and/or an invasion of privacy. Builders should be mindful of how various data sets might pair together and take measures to guard against such risks.
AI systems should ensure privacy and security to the highest degree.
Technology is intrinsically relational. Therefore, (1) those responsible for it (creators and users) must determine (2) who they are accountable to and (3) what they are accountable for.
Humans must bear responsibility for what AI does—including its decision-making—and how it operates—including how it determines its output. Individuals and teams that are empowered to deploy AI must also be accountable for that system—and those accountable must also be empowered.
Following SIL, we affirm that a specific person must be deemed responsible for how an AI system operates, and that a different person must be responsible “for monitoring the effects of the AI usage on the people and processes in the areas where it is used.” Given their responsibility, both should have the authority to pause, restrict, or terminate AI’s operation within their domain of responsibilities.
“No AI system will operate without a designated person responsible for the AI system, generally the AI project lead or their direct supervisor. In addition, a different person(s) should be responsible for monitoring the effects of the AI usage on the people and processes in the areas where it is used. Each of these persons should be empowered to pause, restrict, or terminate AI usage in their area of responsibility, if necessary.”
Builders have a responsibility—commensurate with their control over AI—to account for the various ways that it might harm those persons or groups (Romans 14:12).
God: Accountable parties must take responsibility for how AI systems relate to what is true. This purview includes aspects of human dignity, bias, and other Biblical principles.
Others: Builders—both as individuals and within organizations—must consider how AI mediates their relationships to a host of others. This includes accountability to:
To this end, deployers should maintain a feedback system to understand and address grievances.
Builders must evaluate AI not only across the relationships above but also across time. Builders have an accountability to others from the past and to those in the future—because AI retrieves historical training data, and any products or AI-informed decisions will have an impact on people in the future.
Self: Builders should reflect on how their involvement with AI influences their self-perception—their habits, beliefs, desires, and more. For organizations, this “self-reflection” includes observing how AI affects relationships between colleagues and among departments, as well as what employees believe about AI and what choices they have based on that understanding.
Creation: Finally, builders must account for AI systems, both as created objects stewarded by humanity, as well as how AI shapes, uses, and consumes creation’s resources.
Transparency — AI builders and organizations are accountable to clearly communicate to employees, partners, and users when AI is being used (Philippians 2:3), and especially when and how their data is collected, stored, and used by AI systems. As Praxis writes, “institutions and the systems they deploy [should] become more transparent, while persons and their individual information become more protected.” This responsibility also means that, following the Rome Call, “in principle, AI systems must be explainable.”
Justifying AI — AI is not suitable in all cases—maybe not in most cases. Therefore, executives should provide clear justification for why AI is the best solution to a given problem, and why a less complex solution would not suffice. Justification should cover many of the areas outlined here, including efficacy, ethics, environmental care, and mission alignment.
Continuous Improvement — Builders should stay up to date on AI technical and ethical standards and should incorporate best practices into AI operations. Builders should regularly review AI systems in light of both technical and ethical standards. They should also seek to confirm that their models consistently produce results that align to current benchmarks—for generative AI, benchmark reviews should include alignment to Christian faith statements.
Mission. AI builders and deployers must ask, “To what end are we seeking to deploy our AI system? What future will such systems create?” In other words, does a given AI system align with the stated goals? We must demonstrate that deploying an AI system will support the mission in both outcomes and process.
Bias. AI bias risks maligning Christian mission. Along with the ERLC, “We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion.” For this reason, builders are accountable to pursue AI systems that consistently represent legally protected classes in fair ways—they should neither over- nor under-represent them, nor misrepresent or mislabel such groups. In light of Jesus’ command to love, Christian builders should dream about how they can extend these requirements beyond legal minimums and mere fairness to actual blessing (Romans 12:14).
Reliability. Builders should seek to build AI systems that work reliably and “do not create or act according to bias” (following the Rome Call). Unreliable or inaccurate outputs should be considered harmful. While users should be reminded to responsibly check accuracy, builders should not knowingly push biases or inaccuracies downstream onto users—that’s both unethical and inefficient.
Empowerment and accountability should go hand-in-hand. Individuals and teams that deploy AI systems should also be accountable for them—and those who are accountable must also be the decision-makers empowered to withhold, alter, or deploy AI systems.
Empower Users. Users must also be empowered. To this end, deployers should maintain a feedback system to understand and address grievances.
In AI policy making, those developing ethics guidelines should clearly identify the individuals, departments, and organizations who are empowered and accountable. This clarity will encourage deeper consideration of AI systems.
Before AI deployments, builders and organizations should imagine the far-reaching potential consequences, including abuse, misuse, and unintended consequences of appropriate use. One practice is to do a “pre-mortem” by imagining, “This AI system failed in 1, 5, or 10 years—why did it fail?” With these reflections, builders might see ways they can implement better guardrails, preventative measures, or adequate warnings across the deployment’s ecosystem.
Dependency. AI systems will likely create dependencies. Builders and organizations must determine whether such dependencies pose a risk for them, and to what degree. Without such consideration, they will fail to count the cost.
For any redemptive technology, builders and ministries should be able to say yes to the following questions:
Redemptive Technology refers to systems that—regardless of what “product” they provide for customers—shape the identity of the customer to be more like Christ. These are technologies that, in their use, make trust in Jesus more plausible and therefore more likely. Users of redemptive technology find it easier to trust God, not harder. Redemptive technologies make it easier for users to “live by the Spirit.”
By contrast, redemptive products are those that make life better for consumers, regardless of how the business is run or the product is made. This is the predominant mindset in most businesses today, operating on the belief that “the customer is king” and that “what the customer wants” defines “what makes life better” for the consumer.
Or consider redemptive businesses. They make life better for workers, business partners, and customers, regardless of what kind of product they make. “This type of business makes the world a better place by the way it conducts itself,” writes the Impact Foundation. Whether it makes shoes or chandeliers, a company that seeks to make the world better for everyone it encounters is focused on being “redemptive.” Praxis Labs advocates for business models that incentivize such results.
Thus, at the heart of redemptive technology should be the formation of character. Whereas “impact metrics” adopt an output-focused mindset, character is the outcome-focused metric at the center of redemptive tech. Focusing on character outcomes is the long play. From there, the shorter-term goals will align.
When tech is making trust in Jesus more plausible and making it easier for people to live by the Spirit, we believe that the near-term outputs will fall into place and align with Christian ministries.
Character outcomes are defined by the concept of “Christlikeness.” Jesus embodies the type of character we aim to develop. Technology and character share a common baseline: habits. So does the Christian life. In John Mark Comer’s language, we “practice the way.” This practice is a collection of habits, and these habits develop character (Romans 5:4), shaping us into Christlikeness.
Technology is, above all, a habit-forming practice. Regardless of the content or product of a technology, adopting new technology first requires users to adopt certain habits. So what should these habits be?
We suggest that users who adopt redemptive technologies will also adopt two key habits:
These two habits see all of life as a gift (and presume a Giver). There may be other Christlike habits or character qualities that could provide additional metrics, but we propose these two as the most practical, and believe that many other virtues will develop as consequences of these two.
Developing such Christlikeness is of course the work of the Holy Spirit in us, “giving us the will and the desire to do what pleases God” (Phil 2:12-13). Nonetheless, technology can either encourage or oppose the work of the Spirit by cultivating habits that make trust in God more or less plausible.
Thus, we propose four key measurements for evaluating the impact and effectiveness of tech in ministry contexts:
First, builders and ministries are able to affirm the following:
Second, users who adopt redemptive technologies will simultaneously adopt two core habits:
It is up to builders to creatively discern how redemptive technology might align toward these metrics. We hope others might develop ways to further identify and measure such alignment.
Here are two recent examples:
Internal & External Applications. Ministry leaders might think about applying AI in two different ways—internally and/or externally. Internally, AI could be used within the organization in various processes or in research. In this case, AI may supplement (and occasionally displace) existing roles within the organization and change how various people and departments relate to one another. Externally, organizations could use AI to deliver goods or services to their target population. In this case, AI will supplement how the organization fulfills its mandate and will alter how the organization relates to those it serves.
Chatbots. Some organizations are exploring how they might use AI-driven chatbots to engage people who are interested in learning more about Christianity. In many cases, organizations are using social media to elicit interest, and they receive more responses than their staff and volunteers can handle. Indeed, “the harvest is plentiful,” and some see chatbots as a way to scale up the work of the few, so that true seekers can be engaged meaningfully.
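As a rough sketch of how such a chatbot might be wired up, the snippet below pairs a language-model API call with a simple hand-off rule to a human team member. It assumes the OpenAI Python SDK, and the system prompt, model name, and escalation keywords are illustrative choices, not a description of any organization’s actual system.

```python
# Hedged sketch of an enquiry chatbot that escalates to a human follow-up team.
# Assumes the OpenAI Python SDK; the prompt, model, and keywords are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You answer questions about Christianity honestly and gently. "
    "If someone asks to speak with a real person, say a team member will follow up."
)

def reply(user_message: str) -> str:
    # Hand off to humans when someone asks for personal contact.
    if any(k in user_message.lower() for k in ("talk to someone", "call me", "meet in person")):
        return "Thanks for asking—a member of our team will reach out to you personally."
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(reply("What do Christians believe about forgiveness?"))
```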
Missiology Research. Christian ministries have published significant bodies of missiological research that Natural Language Processing (NLP), a subset of AI, could summarize and deliver to executives seeking to learn and develop new strategies. How might this research be collected and made available to NLP systems, and what outputs would be most beneficial to ministry leaders?
Low-Resource Languages. Many of the languages of unreached people groups (UPGs) have few texts on which to train an NLP system. These limited resources pose a challenge for AI-enabled Scripture translation. This article provides an overview of low-resource machine translation, current solutions, and remaining needs. Facebook has also done research and resourcing for low-resource machine translation.
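As a hedged illustration of current tooling, the snippet below runs Meta’s open NLLB model through the Hugging Face transformers pipeline. The language codes are examples only; truly low-resource languages may be absent from the model or translate poorly.

```python
# Hedged sketch: machine translation with Meta's NLLB model via Hugging Face transformers.
# Language codes are examples; many low-resource languages are missing or poorly supported.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="swh_Latn",   # Swahili, as an example target language
)

result = translator("In the beginning God created the heavens and the earth.")
print(result[0]["translation_text"])
```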
At a recent Missional AI summit focused on NLP, ad hoc teams brainstormed solutions to the low-resource problem. One popular idea involved recording stories from existing low-resource language speakers as a way of gathering large bodies of text for use in training NLP systems.
Ministry organizations can start by asking, “What are our current systems of decision-making?” To this end, organizations might first consider aspects of their work where decision-making is fairly standardized and routine. In such areas, machine learning might be applied fairly directly.
Hiring Decisions. Areas could include application intake forms (with awareness that application bias has occurred in orgs like Amazon). Orgs could draw on historic data about applicants and which kinds of people were selected and rejected. With this data, they could surface which applicant characteristics to look for and which to avoid. This application can also surface the organization’s own values and biases. Those biases need not necessarily be seen as negative. Instead, AI systems may serve as mirrors for the organization, reflecting historically ideal candidates, so that an organization can seek out more candidates like them. Conversely, the AI system’s results may provide the organization with surprising insights about its selection process and criteria, including unconscious biases that have historically shaped selection. Then, the org could decide whether to adjust selection procedures to draw in candidates from other sources.
Choosing Prospective Locations. Another decision point may include determining where to send new missionaries, what Indigenous groups to target, or what kinds of individuals are most open to hearing the gospel. AI systems may be able to find patterns within existing data that are currently invisible to the organization. They could do this by gathering assorted demographic data about target people groups and analyzing it in light of data evaluating the organization’s success within the target context. In doing so, orgs could find regions with similar demographic markers. Additionally, they could then look for applicants whose profiles are similar to the applicants who proved effective in similar contexts.
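A minimal sketch of the “similar regions” idea follows; the demographic features and numbers are invented, and real use would require far richer data and careful interpretation alongside the organization’s own field experience.

```python
# Minimal sketch: group regions by demographic similarity with k-means clustering.
# The features and values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row is a region: [urbanization %, median age, % speaking the primary trade language]
regions = {
    "Region A": [80, 24, 35],
    "Region B": [75, 26, 40],
    "Region C": [20, 31, 90],
    "Region D": [25, 29, 85],
}
X = np.array(list(regions.values()))

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for name, label in zip(regions, kmeans.labels_):
    print(name, "-> cluster", label)
# Regions clustered with past fruitful work become candidates to explore next.
```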
Historical Writings. Similarly, the global church has centuries of writings from faithful saints who have led the church. How might their writings be collected and made available to NLP systems in ways that give churchgoers greater access to them, and answer questions that Christians have asked and addressed again and again throughout history?
Recommender Systems. For Christians and seekers who want to grow in their faith, orgs might deliver relevant content to them using a recommender system. For AI-enabled Bible translation, recommenders suggest candidate words or synonyms for users to apply to a translation. For evangelistic follow-up (perhaps based on a chatbot conversation), recommenders could suggest which seekers are most likely to respond, which ones may have suspicious motives, and what kind of content might serve the next step. Recommender systems often rely on explicit ratings and feedback from earlier users, so they may be better suited to some contexts than others.
Education. Some are imagining that AI systems might be used to tailor education plans to individual learners, as well as evaluate their performance in various subjects.
Staff Education. Many ministry orgs are populated with highly relational people, which is their great strength. Some, though, may perceive AI systems as opposed to relationship-building. It’s important to acknowledge this concern, and the real risks involved.
AI Needs Data. AI systems often require large amounts of data to begin with. Whereas third graders will learn to cross the street holding a parent’s hand and using “stop, look, and listen,” AI systems will require 10,000 or more data-rich videos (and significant energy consumption) to get in the vicinity of achieving a third-grade proficiency. If an org doesn’t yet have this kind of data, it might consider ways to start gathering data points about individuals and activities within the organization and its networks. Exceptions to these data-hungry needs include modeling personalization, user engagement, and chatbots.
Which Data. Deciding what data is important and unimportant will require careful attention. Discerning what data to collect and what data to act on, and the difference between the two, requires prayer, discernment, and the leading of the Holy Spirit.
AI Needs Programmers. In addition to data, orgs will need programmers capable of creating AI systems that reflect the org’s strategies. Partnerships with specialists can be a key strategy in the early stages. Orgs with limited amounts of data may want to first consider hiring programmers familiar with algorithms as a way to learn more.
Organizational Decisions vs AI Decisions. The way that AI systems go about decision-making will be significantly different from current organizational strategies. While AI systems can be highly complex, their statistical precision will be unlike the hundreds of individual decisions individuals make across a company. Whereas in human institutions dozens or hundreds of individuals apply their own complex reasoning, logic, experiences, sentiments, affections, beliefs, values, and instincts to their particular decisions, an AI system is subject to the boundaries of its parent algorithm and the data available to it.
This contrast is not to say a given AI system doesn’t make hundreds, thousands, or millions of decisions. It’s only to say, it won’t use the same interplaying textures of reason, affection, and the rest that humans do. These methods are fundamentally different in many ways. Some will see these differences as an advantage—others as a liability. The reality is the approaches are simply quite different, and so their value depends in part on the contexts where they’re applied.
As a result, the recommendations of an AI may also be a bit counter-intuitive. Putting trust in those recommendations will take time to prove out.
Data Scarcity. Unless orgs have already been collecting significant amounts of data, some AI systems will provide limited value to the organization. With a few exceptions, many AI systems must churn through magnitudes more data than the average human needs to choose a new ministry location, recommend a valuable article, or hire a new employee. Finding patterns in vast troves of data is both AI’s great value and its liability. Unfortunately smaller orgs that don’t have “big data” may benefit less. (Some exceptions could include modeling personalization, user engagement, and chatbots.)
One opportunity here is for multiple ministries to compile their data together so that they might learn from one another’s data and make better decisions. If a mission data collection agency could be set up where data is collected, shared, analyzed, and interpreted by AI systems, orgs could optimize both resources in the field and their distribution to the people and places that most need their services.
Nonetheless, orgs may need to be continually generating more data for AI to be useful to them in an ongoing way. This need brings issues of surveillance and privacy into the mix. “As Christians think about the morality of AI, we need to reflect on the surveillance that allows machines to learn” (source). And with large amounts of data, organizations will need commensurate security to protect their data sets.
Scale and Cost Savings. The value of AI systems may come only at scale, and incremental savings may be hard to show. Thus champions for AI systems may struggle to demonstrate their value to stakeholders within an organization. Because AI systems aggregate entire procedures of decision-making, advocates may find themselves trying to describe the organization’s entire decision-making environment, which might feel a bit like describing water to a fish. In this case, the words “Artificial Intelligence” may win over some skeptics, as long as the dystopian movie fallacies don’t overpower their perceptions.
Perceptions and Expectations. Current perceptions of AI may challenge people’s faith more than the actual technology itself. Hopes and fears about an “artificial superintelligence” far outstrip what current AI systems are capable of. Experts agree that the depictions we see in movies are decades away at best, if not altogether unlikely or even impossible. Nonetheless, popular opinion holds that advances like these could bring about “robot overlords” and replace humanity in some significant way. Hopefully, this report can help right-size people’s expectations, hopes, and fears.
AI Winning Jeopardy! IBM’s famous Watson beat reigning human Jeopardy! champions. However, a team of scientists working for 4 years helped Watson learn from 10,000 previous Jeopardy! questions using the equivalent of 6,000–10,000 desktop computers. David Ferrucci, the IBM scientist who directed the Watson project, pointed out, “Humans do all this with a brain that fits in a shoebox and is powered by a tuna-fish sandwich and a glass of water” (source). And while the other contestants walked off stage and made their way home after the match, IBM’s Watson AI system depended on others to unplug it, pack it up, and haul it away, because it has none of those other capabilities. AI systems today are highly specialized and trained for specific tasks. Humans will do well to recognize what these systems can’t do just as much as what they can.
Encourage Deeper Research. For individuals who still find their faith challenged by the prospect of AI systems, one way to support them may be to encourage them to dig deeper into AI research and learn what its true capabilities are. Use their findings as a springboard to discuss their concerns and questions. AI systems certainly present real questions about what it means to be human, so a biblical theology of the image of God (imago Dei) may help orient the conversation for further exploration.
Compassion International is now registering children digitally using mobile devices. They’ve partnered with Microsoft AI to leverage that data to connect supporters with the needs across Compassion’s 7,500 locations globally. They’re also using their data to better understand what succeeds in fighting poverty.
Evangelism. In October 2017, CV Global in Australia reported using an AI chatbot for evangelism. You can experience it here. In conjunction with human evangelists, JesusBot dialogs with users and helps them learn more about Christianity, with the aim of moving them toward committing their life to Jesus. This podcast discusses some of the theological and missiological implications.
Bible Translation. SIL, Wycliffe, and other Bible translation organizations are leveraging AI to do “machine translation” for low-resource languages of small language groups. AI-based translation suggestions offer a viable starting point for these new translations. For example, “SIL’s linguists use AI to learn languages more quickly and complete Bible translations at a faster pace.” AI systems are also providing quality assessments for these translations.
Assistive Technologies. Captioning, real-time translation, and other services are available for churches and videos. For example, check out spf.io.
News and Scripture. Stanway is using semantic search technology to evaluate the topics within a news article and pair them with sermons pastors are preaching on those topics, so that someone reading a news story can then find biblical content related to that issue. The Context browser extension is doing something similar.
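A hedged sketch of this semantic-search pairing idea follows, using the sentence-transformers library. The model name is a common default, the texts are invented, and this is not a description of how either product works internally.

```python
# Hedged sketch: pair a news article with the most semantically similar sermon summary.
# Uses sentence-transformers; the texts are invented and the model choice is a common default.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

news = "Local food banks report record demand as grocery prices climb."
sermons = [
    "A sermon on generosity and caring for the poor, from Acts 4.",
    "A sermon on the resurrection and Christian hope.",
    "A sermon on the parable of the good Samaritan and loving our neighbors.",
]

news_vec = model.encode(news, convert_to_tensor=True)
sermon_vecs = model.encode(sermons, convert_to_tensor=True)
scores = util.cos_sim(news_vec, sermon_vecs)[0]

best = int(scores.argmax())
print(sermons[best])   # likely the generosity / care-for-the-poor sermon
```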
Healthcare. AI systems are supplementing doctors by improving their accuracy and efficiency at detecting certain types of cancer. Researchers are also looking to use NLP to identify early signs of diseases like Parkinson’s, Alzheimer’s, and others, which include predictable changes in speech patterns.
Runway. Gartner has found most AI projects in business take 4 years to launch, despite shorter predictions. Organizations may benefit from starting with a small pilot project. A “Minimum Viable Product,” in tech parlance, can allocate minimal resources to the effort while maximizing learning across the entire process. Applying AI to a narrow segment of the organization may prove the fastest and most productive. The organization’s financial segment may be one good area to start with, because of its complexity, its key role, and its intrinsic use and ongoing production of numeric data.
Big Data. As outlined earlier, organizations can explore the possibilities by considering what “big data” they already have and which data is already driving organizational goals and decisions. This kind of data probably reflects much of the organization’s primary focus, because “what gets measured is what gets done.” A large body of data like this will be one of the easiest places for an existing organization to start.
Smaller Data? For smaller organizations without a lot of data, some AI systems which require less data include modeling personalization, user engagement, and chatbots. These systems may allow smaller organizations to begin exploring and learning with AI.
Technologists and Tech Stacks. Beyond the timeline and data that are needed to launch AI systems, orgs will of course need AI experts (or enthusiasts) willing to spearhead such a project. Beyond that, O’Reilly has looked at what technical capabilities—the “intelligence stack”—are required for implementing AI within an org.