The basics:
- NJBIZ panel explores AI adoption, regulation, workplace challenges across sectors
- Legal and ethical concerns drive caution even among tech-savvy users
- Companies and educators focus on frameworks, guardrails & training to safely integrate AI
- Experts note overuse and resistance among employees and students, emphasizing need for balanced AI policies
During a wide-ranging NJBIZ panel, experts from business, law, education and IT examined why artificial intelligence feels fundamentally different from past tech shifts — and why companies struggle to balance rapid adoption with risk, regulation and restraint.
The 90-minute Jan. 29 conversation, moderated by NJBIZ Chief Editor Jeffrey Kanige, featured:
- Michael Abboud, CEO, TetherView
- Nicholas Duston, member, Norris McLaughlin PA
- Jason Gulya, professor of English and Communications, Berkeley College
- Doug Nesler, director of AI, Vertilocity
Checking in
“I wanted to start by getting a reality check for where we stand, where businesses and users stand with artificial intelligence,” said Kanige at the open. “I ask because, although I think we’ve all witnessed a lot of technological developments over the years in our careers, it strikes me that this one seems different.
“It seems like there’s a heightened level of skepticism, fear about AI and what it would do, and that sort of happens with any kind of technology,” Kanige continued. “I’m wondering, first of all, whether you’re encountering any of that in your dealings with colleagues, with clients, with people that you deal with in the course of your daily work? Are you seeing the same sorts of things that we see in the media and just in general conversations about the concerns people have about AI?”
Nesler led off, “Oh, sure. I’m definitely seeing those concerns and the disbelief of what is and is not possible today. You hear a lot about hallucinations, and people thinking that’s a limitation of the technology, rather than embracing the fact that we haven’t had the time to catch up to build the frameworks and the tool set around the AI to formulate the context, or ask the questions in the correct format to get it to apply the correct answers.
“And I think a lot of the stories you hear of things that went wrong are very correctable in the very short term. And people are underestimating how fast that’s going to occur.”
I think a lot of the stories you hear of things that went wrong [with AI] are very correctable in the very short term.
– Doug Nesler, director of AI, Vertilocity
Learning curve
“OK, Jason, what about you? I’m curious, especially in an academic setting, whether you’re picking up the same sorts of things, or are students that you deal with used to this kind of stuff by now?” Kanige asked.
“One of the things that I think is so interesting about teaching, especially in the higher ed space right now, is that a lot of these students are on the cusp of graduating, joining the workforce, joining companies and everything else,” said Gulya. “I see a huge combination of students. As a faculty member, you might walk into a group of 18-to-22-year-old college students, and some of them might be using AI for everything all the time. It manages basically their life, and they’re so used to asking it questions.
“I will also have students who actively resist it, who say, ‘I don’t want to touch it.’ Sometimes it’s fear, sometimes it’s that they’re worried about the environmental impact. And so, the big struggle here is trying to figure out how to balance that. Because for a lot of students – and this is for students as well as faculty members – you see either pole. You have some people who are scared of it, don’t want to touch it. That’s certainly there,” he continued. “But you also have, especially from the student side, people who trust it too much.”
Moving toward the center
Gulya said that those students just treat ChatGPT as a search engine, basically.
“And there’s actually a big mindset shift when you walk through how they actually have to check it for accuracy,” said Gulya. “For some students, that’s obvious. Others, not so much. And so, I think the struggle is trying to figure out how to reach all of those sets of students, especially if you have 20 students, they might all have all different approaches toward it.
“So, I see both poles and, increasingly, what I’m seeing as I teach to my students is the vast majority of them start to move toward the center.”
‘Things that have already gone wrong’
“Nicholas, I’m guessing that you’ve heard a lot of concerns. Because I think when folks think about what might come from using this, the first thing they think is possibly legal liability for what we’re doing here,” said Kanige. “What’s been your experience in terms of how people are responding to what’s becoming more widespread adoption of AI?”
“Yeah, that’s right. My view, in particular, of the world is things that have already gone wrong,” said Duston. “And of course, other people at the firm get these questions about how to prevent things from going wrong all the time.
“One thing to your question about skepticism that I find interesting is that, unlike other technology – where people who know about the tech are the ones going, ‘This thing is awesome, let’s use it’ – I find that people who understand how AI works are the ones saying, ‘Let’s pump the brakes here, because that’s not what it’s for.’
I find that people who understand how AI works are the ones saying, ‘Let’s pump the brakes here, because that’s not what it’s for.’
– Nicholas Duston, member, Norris McLaughlin PA
“And it’s the people who just see how amazing it is without understanding what’s under the hood that are like, ‘I’m going to use it for everything,’” he continued. “I’m going to let it write a brief, and I’ll submit that to the court and get sanctioned. I have partners who were software engineers before they became attorneys, and they’re the ones helping me tell everybody, slow down. To Jason’s point, it’s not a search engine. It’s not telling you facts. It’s guessing and possibly lying to you.
“And it’s been really fascinating to me that it’s the people who are usually the tech proponents, are the ones saying – that’s not what it’s for. Let’s slow down.”
One size does not fit all
“And Michael, I think that you probably run into the same sorts of things. Do folks have an appreciation, really, for what AI is?” Kanige asked. “And do the folks with that appreciation – are they the ones who want to slow things down? What’s been your experience?”
“We’re in a really unique position, because we’re an IT service company where we host infrastructure, we manage the Microsoft environment for clients, and we basically have two types of clients,” said Abboud. “Clients that are over-adopting it, without thinking of the guardrails, without thinking of a framework, without understanding the impact it could have and the risk.
“And then we have clients that just outright, kind of just say, ‘No, we’re not allowing it.’ And unfortunately, there’s very few clients that we have that are in the middle.”

Internal threats
Abboud explained that, as with cybersecurity, a lot of people and businesses approach AI with a “set it and forget it” mindset.
“So, because we see small businesses and we see large businesses and we see middle businesses, you have the crazy entrepreneur. I might be classified as one of those, depending on the circle that I’m in,” Abboud continued. “And they say, ‘Let’s just roll it out and put it out there’; and to everyone’s point earlier, you have to understand this is not the be-all holy grail yet. And then we have clients that just outright deny it. So, the first thing that we ask clients to do is, ‘Hey, what’s the risk? What’s the internal risk?’
“Because the biggest risk for AI is not the external threat, right now. We’re trying to make people aware of the internal threat. It’s the old adage of locks keep honest people honest. And if you don’t put classification around your data, if you don’t have a framework, or if you don’t even tell people, ‘Hey, you shouldn’t use AI to write a brief,’ then you really need to start with policy. Then you need to start with classification. And then kind of scale it up from there.”
The discussion continued around a number of critical and pertinent issues about AI adoption, usage, regulations, liabilities, security and much more.
You can read more about this informative panel discussion in the Feb. 2 issue of NJBIZ.