Rajat Chowdhary, Technology Consultant Partner at PwC Middle East, speaks to Economy Middle East about how traditional command centers are evolving to tackle new-age threats. He also stresses the importance of making people feel secure even as AI-driven processes deliver tighter surveillance.
Chowdhary also places special emphasis on how AI and social media can work together to better inform people.
Can you give us some specific examples of how an evolved command center will keep people safe? What will it really do on the ground?
As technology rapidly advances, we face an ever-evolving landscape of threats and crimes. The traditional command centers (CCs) that form the backbone of public safety are undergoing a transformative journey, propelled by emerging technologies such as artificial intelligence/machine learning (AI/ML), predictive analytics, smart resource allocation, holographic displays, and IoT and edge computing.
Therefore, command centers are moving towards being cognitive, with automated and personalized intelligence for efficient on-ground operations. Below are some real-life examples of where cognitive command centers play a key role in public safety.
Smart City Traffic Management
Predictive Analytics: By analyzing historical traffic patterns, the system can predict potential congestion or accidents and alert relevant authorities.
Public Safety During Events
Social Media Monitoring: Cognitive command centers analyze social media for real-time updates and potential threats during large public events.
Natural Disaster Response
Early Warning Systems: Using environmental sensors, cognitive command centers can detect early signs of natural disasters such as earthquakes, floods, or wildfires.
Health Crisis Management
Disease Surveillance: Using healthcare data and information from medical facilities, cognitive command centers can monitor the spread of diseases and identify potential outbreaks.
Facility Security
Integrated Security Systems: Cognitive command centers integrate data from surveillance cameras, access control systems, and video analytics to generate proactive insights.
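The predictive-analytics idea running through these examples — learning what "normal" looks like and alerting when live data deviates from it — can be illustrated with a deliberately minimal sketch. The function name, the moving-average approach, and the threshold are illustrative assumptions, not a description of any actual command-center system.

```python
from collections import deque

def congestion_alert(counts, window=4, threshold=1.5):
    """Flag intervals where vehicle counts spike above the recent average.

    counts: sequence of vehicle counts per time interval.
    Returns the indices of intervals whose count exceeds `threshold`
    times the moving average of the preceding `window` intervals.
    Illustrative only -- real systems use far richer models.
    """
    alerts = []
    recent = deque(maxlen=window)
    for i, count in enumerate(counts):
        if len(recent) == window:  # need a full window before judging
            baseline = sum(recent) / window
            if baseline > 0 and count > threshold * baseline:
                alerts.append(i)
        recent.append(count)
    return alerts

# A sudden jump against a steady baseline is flagged;
# steady traffic is not.
print(congestion_alert([100, 105, 98, 102, 250, 101]))
```

The same compare-against-baseline pattern generalizes to the other examples above: sensor readings for early disaster warning, or case counts for disease surveillance.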
Do you think an AI-backed command center will be better at predicting and helping control situations such as a food crisis, or a pandemic?
Imagine a cognitive command center that integrates advanced technologies and quickly processes large amounts of historical and real-time data. The aim of the command center is to “predict, pre-empt and prescribe”, highlighting potential vulnerabilities and incidents well in advance to better monitor, plan and manage an evolving situation. If, however, the situation escalates, the center will be equipped with the appropriate skill sets, standard operating procedures and interaction mechanisms. Lastly, it will have the tools and technologies to “prepare, respond, recover and review”, adopting learnings to improve future incident response. The command center must be proactive, and at the same time reactive, to keep people safe and secure.
These new-age command centers, facilitated by integration of next-gen technology, will enhance the overall effectiveness, enabling quicker response times, improved situational awareness, enhanced collaboration, data-driven decision-making capabilities and consciousness of citizens’ needs.
For example, to manage a pandemic, AI-based analytics and correlation between various data sets such as epidemiological data, healthcare records, social media trends, and other relevant information can detect early signs of a pandemic. This enables faster response in terms of implementing public health measures, resource allocation, and communication strategies.
AI can learn from real-time data and past experiences, continuously improving a command center’s prediction and response capabilities. This continuous learning enables the command center to adapt strategies, refine predictions, and improve overall crisis management capabilities over time.
How will you balance views that an AI-driven command center will make people feel more vulnerable, almost like someone is keeping an eye on them constantly?
Ensuring privacy and mitigating concerns about constant surveillance are essential considerations if command centers are to make people feel safe, secure and not vulnerable. Command centers are tech-powered and increasingly leverage data anonymization and purpose limitation, and they maintain dedicated functional units that promote the use of ethical AI across the service delivery value chain.
Let us take the example of a city bus station that uses AI-based facial recognition to ensure safety. The system incorporates face masking to safeguard individuals’ privacy: the facial area is replaced with a generic or pixelated mask, obscuring facial details while preserving the person’s overall appearance. This is just one of many use cases where technology contributes to protecting the privacy of individuals.
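The pixelated-mask technique described above is straightforward to sketch: average each small square of the face region so detail is destroyed but the rough silhouette survives. This is a minimal, hypothetical implementation, not the bus station’s actual system; the function name and parameters are assumptions for illustration.

```python
import numpy as np

def pixelate_region(image, top, left, height, width, block=8):
    """Obscure a rectangular region (e.g. a detected face) by block-averaging.

    image: H x W x C uint8 array. Each `block` x `block` square inside the
    region is replaced by its mean colour, hiding facial detail while
    keeping the overall appearance. Returns a new array; input untouched.
    """
    out = image.copy()
    region = out[top:top + height, left:left + width].astype(float)
    for y in range(0, height, block):
        for x in range(0, width, block):
            tile = region[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten tile to its mean colour
    out[top:top + height, left:left + width] = region.astype(np.uint8)
    return out
```

In practice the `top`/`left` coordinates would come from a face detector, and only the masked frame would ever leave the camera.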
Moreover, risk-based surveillance and screening of individuals can also be enabled by employing AI-based algorithms. The essence is to utilize a comprehensive risk assessment framework for risk profiling based on objective criteria, avoiding profiling based solely on personal characteristics like race, religion, or ethnicity.
This can inform a targeted approach to surveillance, focusing on specific individuals or groups based on credible intelligence or risk factors. Such targeting minimizes the impact on the broader population while addressing specific security concerns.
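A risk assessment built on objective criteria can be pictured as a weighted score over behavioural signals. The signal names and weights below are invented for illustration; the point is structural: personal characteristics such as race, religion, or ethnicity simply do not appear as inputs.

```python
def risk_score(event, weights=None):
    """Toy risk score built only from objective, behavioural signals.

    `event` maps signal names to 0/1 indicators (or graded 0..1 values),
    e.g. tailgating through a gate or using an expired credential.
    Personal characteristics are deliberately absent from the model.
    Returns a score in [0, 1]. Weights here are arbitrary examples.
    """
    weights = weights or {
        "tailgating": 0.4,          # followed someone through access control
        "expired_credential": 0.35,  # badge or permit no longer valid
        "after_hours_access": 0.25,  # repeated entry outside working hours
    }
    total = sum(weights.values())
    score = sum(w * float(event.get(name, 0)) for name, w in weights.items())
    return score / total

def needs_review(event, threshold=0.5):
    """Only events above the threshold are escalated for human review."""
    return risk_score(event) >= threshold
```

Keeping a human reviewer behind the threshold, rather than acting on the score automatically, is what makes the approach targeted rather than indiscriminate.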
AI can be a double-edged sword. While it can empower us to fight better, it also empowers those who want to cause harm. Can you highlight some of the ways in which cybercriminals have used AI to trap unsuspecting people?
AI technologies have the potential for immense positive impact. But it is crucial to acknowledge that, like any powerful tool, AI can be misused or exploited.
For instance, the development of sophisticated deepfake technology enables the creation of highly convincing fake videos or audio recordings. This allows malicious actors to manipulate and deceive individuals by fabricating events or statements. Automated social engineering attacks, fueled by AI algorithms analyzing vast datasets, can craft personalized and convincing phishing attempts, manipulating individuals into divulging sensitive information. AI-powered cybersecurity threats pose risks through the creation of advanced and adaptive malware. They leverage machine learning to evade traditional defense mechanisms. Autonomous vehicles, guided by AI, could be manipulated to cause intentional accidents or engage in malicious activities. Additionally, AI-driven disinformation campaigns can exploit algorithmic bias to spread fake news, influencing public opinion and potentially disrupting democratic processes.
To address these concerns and mitigate potential harm, it is essential to prioritize ethical AI development, establish robust regulations, foster transparency, and promote responsible use of AI technologies. Additionally, ongoing research, collaboration, and public awareness are crucial in navigating the ethical challenges posed by the misuse of AI.
How do you see AI and social media working together to better inform people? At the moment, we’ve come across videos featuring singers who have long passed away, yet they’re depicted singing contemporary songs. It’s hard to tell the difference. It won’t be long before people use their AI skills to spread misinformation on social media. What are your thoughts?
AI and social media can work together to better inform people. The integration of AI with social media presents a transformative opportunity to enhance information dissemination and curb the spread of misinformation.
AI algorithms can play a pivotal role in content moderation by employing automated fact-checking mechanisms. This can help swiftly identify and flag misleading information. Additionally, these algorithms can personalize user experiences, delivering tailored and relevant content through sentiment analysis and user behavior insights. Predictive analytics powered by AI enable early detection of emerging trends. This ensures timely responses to potential misinformation campaigns or public concerns. Chatbots, utilizing natural language processing, can engage with users, providing accurate information and addressing queries. Furthermore, visual recognition technologies can assist in verifying multimedia content, ensuring the authenticity of shared images and videos.
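The automated fact-checking idea can be reduced to its simplest form: compare incoming posts against claims that fact-checkers have already debunked. The sketch below uses exact substring matching on normalized text purely to keep the idea visible; real platforms use learned similarity models, and every name here is a hypothetical.

```python
import re

def _normalize(text):
    """Lowercase and strip punctuation so trivially reworded copies match."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def flag_posts(posts, known_false_claims):
    """Return the posts that repeat an already-debunked claim.

    A post is flagged if any debunked claim appears inside its
    normalized text. Illustrative only -- production moderation
    pipelines rely on far more robust matching.
    """
    debunked = [_normalize(c) for c in known_false_claims]
    return [p for p in posts if any(c in _normalize(p) for c in debunked)]
```

Flagged posts would then feed the downstream steps described above: labelling, down-ranking, or routing to a human fact-checker.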
By fostering a symbiotic relationship between AI and social media, platforms can contribute to a more informed user base, actively combating the propagation of misinformation and reinforcing the credibility of shared content.
About Rajat Chowdhary
Rajat Chowdhary has been working with PwC since 2019. He earlier held several positions with PwC India, where he worked for 11 years before moving to the UAE.
Prior to working with PwC, Rajat was with EDS, an HP company, and HCL Technologies.
Rajat holds a B.Tech and a management degree.