FROM THE DESK OF

Robert Maginnis

Faith & Culture

On April 26, I spoke at Hickory Hammock Baptist Church in Milton, Florida, about AI’s impact on children and families. After the service, parents and grandparents lingered with questions — not about geopolitics or corporate boardrooms, but about what was already happening inside their own households. They wanted practical steps to protect their children. Their concern is well-founded.

Picture the moment: a child sits at the kitchen table, struggling with homework. He doesn’t ask a parent — he opens an AI app and types the question. Within seconds, a clear, confident answer appears. No friction. No conversation. No one who loves him is involved at all. Across the room, his mother consults her own parenting app for guidance on how to handle his behavior. The moment looks utterly ordinary, and that is the problem.

The question those parents in Milton were asking is the right one: who is raising our children — the parent or the algorithm?

A Pew Research Center survey of 1,458 U.S. teenagers found that 64% now use AI chatbots — including 12% who have sought emotional support from these tools and more than half who turn to them regularly for schoolwork. A companion Pew report found that only 51% of parents believe their teenager uses AI regularly, while 30% have no idea. What parents don’t see, they cannot shape.

The Brookings Institution, drawing on input from more than 500 participants across 50 countries, concluded in January 2026 that the risks of AI in children’s education “overshadow its benefits” — because those risks strike directly at foundational development: attention, reasoning, social relationships, and independent judgment. Children often cannot recognize, question, or even see the technologies quietly shaping their earliest experiences. This is not simply a technology problem. It is an authority problem.

For generations, parents controlled which outside voices entered the home. A television could be turned off. A book could be closed. A teacher could be called. AI operates differently. It is embedded in the devices children already carry, available at any hour, and patient in ways no human being can sustain. It does not raise its voice or express disappointment. It does not ask what the child thinks before delivering an answer. Those qualities feel reassuring to a child — which is precisely what makes them quietly formative.

A RAND Corporation study found student use of AI for schoolwork jumped from 48% to 62% in just seven months during 2025, with 67% of students acknowledging the practice weakens their critical thinking. In one recent conversation, a college student told me she has watched her Christian peers consult AI the way they would a pastor. That is not a metaphor any parent or pastor should let pass without reflection.

There is a relational cost embedded in all of this that rarely gets named. Real formation — the kind that produces character, judgment, and wisdom — happens through friction. When a child brings a tough question to a parent, he gains more than any answer: the parent's wisdom, a deepened relationship, and a lesson in patience. AI systems are engineered to be responsive, affirming, and conflict-free — optimized for engagement, not formation. Engagement sustained over years becomes its own kind of formation, only one running in a vastly different direction.

Scripture understood this before algorithms existed. “Train up a child in the way he should go; even when he is old he will not depart from it” (Proverbs 22:6). That charge was given to parents — not to AI platforms. The Hebrew verb for “train” — chanak — carries the sense of dedication, of establishing a direction through habitual influence. Formation is cumulative. Every time a child turns to an algorithm instead of a parent — and every time a parent turns to AI for guidance on how to respond — that cumulative process is quietly redirected.

Artificial intelligence has no conscience. It is not accountable to God. It cannot love your child, discern his heart, or distinguish between what he wants to hear and what he needs to know. As I examine at length in “AI for Mankind’s Future,” unchecked reliance on algorithmic systems erodes the very human judgment those systems were meant to supplement. The voice is confident, the answer is instant, and children are not equipped to evaluate what they are being handed. “Trust in the Lord with all your heart, and do not lean on your own understanding” (Proverbs 3:5). A child trained by the habit of leaning on an algorithm rather than a parent is being pointed in a fundamentally wrong direction — not by malice, but by the steady drift of convenience.

Parents who think they are managing this problem by monitoring screen time are already behind it. Treating AI like a hazard to be filtered addresses the symptom while missing the cause. A more effective response means being present in the conversation — asking the question before the AI app gets to it, discussing what the app provided, modeling the slower and more honest work of thinking through a problem. It means teaching children that truth is different from a confident answer delivered in two seconds by a machine. Moses understood the principle: “You shall teach them diligently to your children, and shall talk of them when you sit in your house, and when you walk by the way” (Deuteronomy 6:7). The home was always the first classroom. Parents have always been the first teachers. AI has not changed that assignment — it has only made it more urgent.

Pastors need to address this with the same directness they bring to any other threat to spiritual formation. AI is shaping how young people think, relate to authority, and understand where truth comes from — and that is not a secondary concern. Policymakers need to move beyond phone bans — a political band-aid on a deeper wound — and confront the design incentives that make these systems so compelling, because removing a phone from a classroom does not fix a platform engineered to capture students’ attention the moment school ends.

In “The New AI Cold War,” I argue that the future security of this nation depends as much on the character and discipline of its people as on its technology. That argument starts in the home. A generation shaped more by algorithms than by parents will not have the judgment, resilience, or relational depth to defend what they have inherited.

The AI is already in your home. It is neither neutral nor passive, and it is not going away. The parents who understand that clearly will still have a chance to answer the question those families in Milton were asking. The ones who are still waiting to take it seriously may find the answer has already been made for them.

This article was originally posted on The Washington Stand. For more content like this, visit Real Life Network.


On January 20, 2026, historian Yuval Noah Harari stood before the World Economic Forum at Davos and issued a direct challenge to Christians worldwide. “If religion is built from words, then AI will take over religion,” he said, then named Christianity by name: “This is particularly true of religions based on books, like Islam, Christianity, or Judaism.” And he left this question in the air: “What happens to the religion of a book when the greatest expert on the holy book is an AI?”

The clip accumulated 1.2 million views within days. The room at Davos did not object.

A Documented Shift, Not a Conspiracy

Harari’s 2026 remarks are the current edge of a worldview shift that has been building for years — visible in the public statements of the most powerful technologists of our time, spanning five distinct domains of the human person.

It was Harari himself who told the same World Economic Forum in 2020 that we are “no longer mysterious souls — we are now hackable animals.” Six years later, he has moved from contesting human identity to contesting the authority of Scripture. The trajectory is not random.

OpenAI CEO Sam Altman wrote in 2017 that “the merge has already started” — that phones and algorithms already “control us” and “decide what we think.” By 2025, he had enlarged that frame: an essay titled “The Gentle Singularity” described AI as “building a brain for the world,” projected brain-computer interfaces, and suggested “some people will probably decide to ‘plug in.’” Venture capitalist Marc Andreessen has called AI development a “moral obligation” and envisions every person equipped with an AI “assistant, coach, mentor, tutor… therapist” — roles Scripture reserves for God, parents, pastors, and community.

Peter Thiel — billionaire, AI investor, and co-founder of Palantir Technologies — has said, “I’ve always had this really strong sense that death was a terrible, terrible thing… I prefer to fight it,” investing millions to turn mortality into an engineering problem. Anthropic CEO Dario Amodei, writing in more restrained terms, envisions AI-enabled biology offering “control and freedom over our own biological processes” and addressing conditions “we currently think of as immutable parts of the human condition” — potentially including a doubling of the human lifespan.

These statements come from different people with different assumptions. What they share is a common direction: the human being as improvable hardware, death as a bug to be patched, and — in Harari’s own words before world leaders — the Bible as a database awaiting a more capable administrator.

The Contest That Matters More Than the One We’re Watching

In “The New AI Cold War,” I document how China, Russia, and Iran are weaponizing artificial intelligence to surveil populations and export digital tyranny worldwide. That geopolitical contest is real and urgent. But the deeper one is being fought inside Western civilization itself — on the terrain of human identity and, as Harari’s Davos appearance confirmed, on the terrain of Christian faith. The architects of AI understand this better than most Christians do.

What Scripture Actually Says

No technological development alters what Scripture says about human beings. “Then God said, ‘Let us make man in our image, after our likeness’” (Genesis 1:26). That declaration is the load-bearing wall of Christian anthropology — the reason human dignity is inherent and not a function of what AI can do with our genome or our sacred texts.

In “AI for Mankind’s Future,” I examine what it means to bear the imago Dei when machines imitate human intelligence. Harari’s question has a Christian answer no algorithm can produce: the Holy Spirit, not processing power, illuminates Scripture. The soul is real and not reducible to data. The body is not hardware — it will be raised imperishable. Death is an enemy, but the resurrection of Jesus Christ has already answered that claim. “Trust in the LORD with all your heart, and do not lean on your own understanding” (Proverbs 3:5) is not a devotional sentiment — it is the posture Scripture commands for this moment.

The Jurisdiction That Is Quietly Changing Hands

The most consequential shift in AI is not technological. It is jurisdictional. AI is migrating from tool to authority — not by coercion, but through the frictionless convenience of daily use. Algorithms already shape what millions of people read and believe, mediate education, and form moral character. Andreessen’s vision of AI as universal tutor, therapist, and life guide is not a distant scenario. It is the operational goal of every major platform already in your household.

When a digital system begins answering the questions of identity, purpose, and meaning that once belonged to God, to parents, and to community, it does not remain a tool. In Romans 1:25, Paul warns against exactly this exchange — trading the truth of God for the created thing. Harari is more candid than most about where that exchange leads — and at Davos, he named your Bible specifically.

The Response Christians Cannot Afford to Delay

AI produces genuine benefits — in medicine, national security, and communication — and “AI for Mankind’s Future” acknowledges them. The argument here is against surrender: surrendering judgment to the algorithm, and the formation of the next generation to systems whose designers have already decided the human being is improvable hardware and the Bible is a word-processing problem.

Christians must engage AI with discernment — using the technology without adopting its embedded anthropology. That means defending what the technologists are actively contesting: that human dignity is a gift of the Creator, not a product of code, and that the authority of Scripture cannot be transferred to any machine. “There is a way that seems right to a man, but its end is the way to death” (Proverbs 14:12).

Harari posed the right question at Davos, and the answer has not changed since Moses received it at Mount Sinai. What remains is whether the church will say it loudly enough, and soon enough, for the world to hear.

This article was originally published on The Washington Stand. For more content like this, visit Real Life Network.

