AI hallucinations and the future of trust: Insights from Dr. Ja-Naé Duane on navigating risks in AI

As artificial intelligence continues to shape the future of work, education, and human interaction, concern over its limitations is growing alongside it, including the rise of so-called “AI hallucinations,” where AI systems confidently present misinformation. With The New York Times and other major outlets highlighting these risks, how should we balance innovation and responsibility in AI?

To help us navigate this complex landscape, we sat down with Dr. Ja-Naé Duane, an internationally recognized AI expert, behavioral scientist, and futurist. A faculty member at Brown University and a research fellow at MIT’s Center for Information Systems Research, Dr. Duane has spent over two decades helping governments, corporations, and academic institutions harness emerging technologies to build better, more resilient systems.

Her insights have been featured in Fortune, Reworked, AI Journal, and many others. Her latest book, SuperShifts: Transforming How We Live, Learn, and Work in the Age of Intelligence, explores how we can thrive in an era defined by exponential change.

Let’s dive in.

Dr. Ja-Naé Duane – AI expert, leading behavioral scientist, Brown University faculty member, and MIT research fellow.

1. How do you assess the systemic risks of AI hallucinations, particularly in high-stakes domains like healthcare, law, or public policy?

AI hallucinations, in which systems confidently generate false, misleading, or entirely fabricated information, represent a profound and growing systemic risk, especially in high-stakes environments such as healthcare, law, and public policy. These outputs do not arise from malicious intent, but rather from the limitations of large language models, which rely on statistical associations rather than factual understanding. In healthcare, the consequences can be life-threatening. Misdiagnoses, hallucinated symptoms, and incorrect treatment suggestions jeopardize patient safety, increase liability, and erode trust in clinical AI systems.

In legal settings, hallucinations can distort judicial outcomes, particularly when systems fabricate precedents or misquote legal statutes, thereby undermining the fairness and integrity of decisions. In public policy, inaccurate or fabricated data can mislead government responses, distort public records, and create vulnerabilities that malicious actors might exploit. Unlike traditional misinformation, which often stems from human intent, AI hallucinations are more challenging to detect because they are generated with confidence and plausibility. This makes them more insidious and less likely to be noticed in fast-paced decision-making environments.

The broader implications extend beyond individual errors and impact societal trust in institutions and the legitimacy of data-driven systems. To address these risks, we require rigorous validation, real-time monitoring, precise human oversight mechanisms, and regulatory frameworks specifically designed to handle AI’s unique failure modes. Hallucinations are not merely technical glitches. They are structural vulnerabilities with far-reaching consequences that demand deliberate and coordinated mitigation.

2. Are organizations sufficiently prepared to detect and mitigate AI errors now, or are they moving too quickly without safeguards?

Organizations today are in a precarious transition. Many are rushing to implement AI systems for efficiency, automation, and competitive advantage, yet few are adequately prepared to detect and mitigate the errors that can arise. While advances in enterprise AI risk management are emerging, such as using AI to anticipate threats, flag anomalies, or automate compliance, most existing risk frameworks were not built with AI’s complexity in mind. They lag in key areas like data governance, oversight protocols, and real-time monitoring. Many organizations still rely on siloed teams and outdated manual processes that fail to detect subtle or evolving risks inherent in AI models. Compounding the problem is the widespread lack of AI-ready data, which undermines model performance and increases the likelihood of errors going unnoticed.

Security vulnerabilities such as model poisoning and prompt injection attacks require new forms of technical defense that most enterprises have not yet adopted. Moreover, human oversight, the critical last line of defense, is often underdeveloped or under-resourced. While organizations are moving with urgency, this speed usually comes at the expense of safety. Overconfidence in traditional analytics or a failure to understand AI-specific risks can lead to costly mistakes, reputational damage, or regulatory exposure. As AI continues to evolve, so must the systems and mindsets that govern it. Until safeguards are embedded into the core of organizational AI strategies, the current pace of adoption may be outstripping our capacity to use these tools wisely and safely.

3. How do you view the psychological impact of AI-generated misinformation on users who may not fully understand the technology’s limitations?

The psychological impact of AI-generated misinformation is significant and deeply concerning, especially for individuals who lack the technical background to understand how these systems work or how their outputs are generated. When AI presents inaccurate information with the same confidence as factual content, it becomes increasingly difficult for users to distinguish between truth and fiction. This ambiguity breeds confusion, fear, and anxiety. It also contributes to cognitive overload, as people are forced to navigate a complex digital environment where even trusted systems may not be reliable. Studies show that exposure to AI-generated fake news is associated with decreased media trust, increased polarization, and antisocial behavior. In this climate, users may develop cynicism, helplessness, or apathy toward information systems. This erosion of trust does not stop at AI. It spills over into institutions, news outlets, and public discourse. We are building trust in AI on uncertain foundations; the consequences are already visible.

Public confidence is being undermined by misinformation and a lack of transparency, inconsistent governance, and the opaque nature of many AI systems. Media coverage that sensationalizes or oversimplifies the risks only adds to the confusion. To restore trust and mitigate psychological harm, we must enhance public understanding of AI’s limitations, invest in media literacy, and establish clear ethical guidelines. Without these measures, misinformation’s emotional and cognitive toll will continue to grow, weakening societal resilience when clarity and trust are more vital than ever.

4. What responsibility do developers and institutions bear in shaping the narrative and governance of AI?

In SuperShifts, we emphasize that developers and institutions are not merely participants in AI’s evolution. They are its architects. As AI becomes increasingly embedded in how we live and work, the choices made by those building and governing these systems will shape the future’s moral, social, and institutional frameworks. Developers are responsible for designing systems that are not only technically robust but also ethically grounded. This means embedding human values such as dignity, equity, and transparency into the very foundations of the technology.

Institutions must also rise to the challenge of developing adaptive governance models that can keep up with the rapid pace of innovation. That includes fostering cross-sector collaboration, involving diverse stakeholders in decision-making, and ensuring that the narratives surrounding AI are shaped by empathy and foresight rather than fear or hype. As SuperShifts explores through themes like IntelliFusion and SocialQuake, the convergence of human and machine intelligence is as much a cultural transformation as a technological one. If the dominant story becomes one of obsolescence or loss of control, we risk creating resistance, fear, and exclusion. However, if institutions frame AI as a collaborative and transformative tool that empowers humans and strengthens communities, we can build public trust and guide AI toward a more inclusive future. This is not just about regulation or design. It calls for wisdom, imagination, and collective responsibility from those at the helm of innovation.

5. What practical steps should be prioritized to ensure AI evolves as a tool for collaboration rather than confusion or harm?

To ensure AI matures as a collaborative force rather than a source of confusion or harm, we need a coordinated set of practical actions across policy, education, and industry. On the policy front, governments should prioritize regulatory frameworks that categorize AI applications based on their level of risk. High-impact systems in healthcare, finance, and law enforcement must meet stricter safety, transparency, and human oversight standards. Regulation must be both anticipatory and adaptive, keeping pace with rapid technological advancement while grounding its protections in fundamental rights.

Policymakers should also promote international cooperation to prevent fragmented oversight and ensure that global AI systems adhere to consistent and ethical standards. In education, we must begin preparing people to live and work with AI by integrating AI literacy into school curricula. Educators need the tools and training to use AI responsibly, and students should have a voice in shaping the policies that govern its use in their learning environments. Within industry, companies must conduct routine audits to detect bias, validate safety, and ensure compliance with evolving standards. They should also build transparency into their systems, allowing users to understand how AI makes decisions and intervene when necessary.

Most importantly, businesses must engage in ongoing conversations with regulators, researchers, and communities to align their innovation with societal expectations. Without this shared approach, AI may deepen inequality and confusion. However, with care, cooperation, and intentional design, we can build a future where AI enhances human potential and becomes a trusted partner in shaping a more resilient and intelligent world.

How to ace a cloud engineer interview: A comprehensive guide

Landing a cloud engineering role can be both exciting and daunting. Technical interviews are designed to test not just your technical knowledge but also your ability to think critically, solve problems, and communicate effectively. In a recent round of interviews for a junior Cloud Engineer intern at Learn to Cloud, candidates were assessed on four key tasks.

This article dives deep into these tasks, offering insights into how to prepare effectively, what hiring managers look for, and how to showcase your skills confidently. Whether you’re an aspiring cloud engineer or looking to refine your interview skills, this guide will help you approach technical interviews with clarity and confidence.

The Four Key Interview Tasks

During the interview process, candidates were required to complete four main exercises:

  • Identifying the function of a provided Bash script.
  • Whiteboarding and explaining their submitted project.
  • Debugging a provided FastAPI application.
  • Whiteboarding a migration of a Learn to Cloud capstone project.

Each task was designed to evaluate different technical and problem-solving skills. Let’s break them down one by one.

Task 1: Identifying the Function of a Bash Script

Why This Matters

Bash scripting is a fundamental skill for cloud engineers. Many cloud-related tasks involve automating processes using Bash scripts. Understanding and debugging these scripts is crucial in real-world scenarios.

How to Approach It

Candidates were given a Bash script and asked to determine its purpose. The script contained key elements such as:

  • Variables
  • Functions
  • A case statement handling start, stop, restart, and status commands
  • Log messages for troubleshooting

Strategy for Success

  • Understand the Big Picture – Instead of analyzing line by line, first look at the structure of the script. Identify key functions and determine their role.
  • Find Clues in Log Messages – Log messages often describe what each part of the script is doing.
  • Break Down Functions – Identify which parts of the script handle dependencies, process execution, and logging.
  • Use Tools – Tools like ChatGPT can provide useful explanations when analyzing a script.

By following this structured approach, candidates could deduce that the script was monitoring a directory for new MP4 files and extracting audio from them.
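
The article doesn’t reproduce the script itself, so the sketch below is a hypothetical reconstruction built from the clues listed above: variables, functions, log messages, and a case statement handling start, stop, restart, and status. The watch directory, polling approach, and ffmpeg invocation are illustrative assumptions, not the interview’s actual code.

```bash
#!/usr/bin/env bash
# Illustrative sketch of a service-style watcher script with the same
# building blocks the interview script had: variables, functions,
# log messages, and a start|stop|restart|status case statement.

WATCH_DIR="${WATCH_DIR:-/var/media/incoming}"   # assumed paths
OUT_DIR="${OUT_DIR:-/var/media/audio}"
PID_FILE="/tmp/mp4watcher.pid"
LOG_FILE="/tmp/mp4watcher.log"

log() { echo "$(date '+%F %T') $*" >> "$LOG_FILE"; }

extract_audio() {
    local src="$1"
    local dst="$OUT_DIR/$(basename "${src%.mp4}").mp3"
    log "extracting audio: $src -> $dst"
    ffmpeg -nostdin -y -i "$src" -vn -q:a 2 "$dst" 2>> "$LOG_FILE"
}

watch_loop() {
    log "watching $WATCH_DIR for new MP4 files"
    while true; do
        for f in "$WATCH_DIR"/*.mp4; do
            [ -e "$f" ] || continue        # glob matched nothing
            [ -e "$f.done" ] && continue   # already processed
            extract_audio "$f" && touch "$f.done"
        done
        sleep 5
    done
}

start() {
    [ -f "$PID_FILE" ] && { echo "already running"; return 1; }
    mkdir -p "$OUT_DIR"
    watch_loop &
    echo $! > "$PID_FILE"
    log "started (pid $!)"
}

stop() {
    [ -f "$PID_FILE" ] || { echo "not running"; return 1; }
    kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
    log "stopped"
}

status() {
    if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
        echo "running (pid $(cat "$PID_FILE"))"
    else
        echo "not running"
    fi
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    restart) stop; start ;;
    status)  status ;;
    *)       echo "usage: $0 {start|stop|restart|status}"; exit 1 ;;
esac
```

Reading a script like this top-down mirrors the recommended strategy: the case statement at the bottom reveals it is a small service wrapper, the log messages name each step, and the ffmpeg call inside extract_audio() gives away the MP4-to-audio purpose.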

Task 2: Whiteboarding and Explaining Your Project

Why This Matters

Cloud engineers must be able to communicate their ideas clearly. Being able to break down a project and explain its architecture shows both technical understanding and communication skills.

How to Approach It

Candidates were asked to present and explain a project they had previously worked on. A strong response included:

  • High-Level Overview: What the project does and why it was built.
  • Architecture Breakdown: Components used (e.g., APIs, databases, cloud services).
  • Technology Stack: Frontend, backend, and cloud tools involved.
  • Challenges & Solutions: Obstacles faced and how they were resolved.
  • Future Improvements: Potential enhancements or scalability considerations.

Strategy for Success

  • Use Simple Diagrams – A basic sketch of your architecture can make explanations clearer.
  • Explain in a Logical Flow – Start with the problem, then explain how your solution works.
  • Highlight Key Features – Focus on the most important aspects rather than overwhelming details.
  • Anticipate Questions – Think about potential follow-ups and be ready to explain your choices.

Task 3: Debugging a FastAPI Application

Why This Matters

Debugging is a core skill for cloud engineers, as real-world issues often require quick identification and resolution of problems in API-driven applications.

How to Approach It

Candidates were given a FastAPI application with a missing line of code and asked to figure out why pagination wasn’t working.

Strategy for Success

  • Familiarize Yourself with the Codebase – Identify API endpoints and their functionality.
  • Understand the Problem Statement – What is the expected behavior vs. what is happening?
  • Check for Missing Components – Look at comments and existing logic to see what might be absent.
  • Apply Debugging Techniques – Use print statements, error logs, or tools like Postman to simulate API requests.

The missing component in this case was a page variable, which was essential for pagination to work correctly. Candidates who followed a structured approach to debugging were able to identify the issue efficiently.
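
Since the interview codebase isn’t included in the article, here is a minimal sketch of what the bug’s shape might look like in FastAPI: a paginated endpoint whose slice offset depends on a page value. The route, data, and parameter names are assumptions for illustration, not the actual exercise code.

```python
# Hypothetical reconstruction of the pagination bug's shape.
from fastapi import FastAPI, Query

app = FastAPI()

# Illustrative in-memory data standing in for the application's real store.
ITEMS = [{"id": i, "name": f"item-{i}"} for i in range(1, 101)]

@app.get("/items")
def list_items(
    page: int = Query(1, ge=1),          # <-- the kind of line that was missing
    size: int = Query(10, ge=1, le=50),  # page size, capped for sane responses
):
    start = (page - 1) * size            # the offset is derived from `page`
    return {"page": page, "size": size, "items": ITEMS[start : start + size]}
```

Run it with uvicorn (for example, uvicorn main:app --reload) and request /items?page=2&size=10. Without a declared page value, the handler has nothing to derive the offset from, so pagination can never advance past the first page, which is exactly the symptom described above.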

Task 4: Whiteboarding a Cloud Migration Strategy

Why This Matters

Cloud engineers often work on migrating applications between different deployment models (e.g., serverless to infrastructure-as-a-service). Understanding cloud migration strategies is essential for real-world cloud projects.

How to Approach It

Candidates were asked how they would migrate a project deployed on serverless functions to an infrastructure-as-a-service (IaaS) model.

Strategy for Success

  • Understand the Source Architecture – In this case, a serverless API with a cloud database.
  • Define the Target Architecture – A two-tier application using virtual machines.
  • Consider Scalability & Security – Implement load balancers, security groups, and network segmentation.
  • Use Cloud Best Practices – Ensure high availability, logging, and monitoring are incorporated.

A strong answer included a basic architecture diagram with:

  • An API layer running on virtual machines.
  • A database tier on a separate VM.
  • Load balancing and auto-scaling considerations.
  • Security configurations, such as network security groups.

General Tips for Cloud Engineer Interviews

Beyond the technical challenges, soft skills and preparation play a crucial role in interview success. Here are some additional tips:

  1. Assess Your Current Skills: Before your interview, evaluate your strengths and weaknesses. Spend extra time on areas where you feel less confident.
  2. Learn to Communicate Clearly: Technical ability alone isn’t enough—you need to articulate your thought process and reasoning.
  3. Practice Whiteboarding: Being able to visually explain concepts will help you stand out.
  4. Use Online Resources: Platforms like ChatGPT, Postman, and cloud provider documentation can help reinforce your knowledge.
  5. Stay Calm Under Pressure: Interviews can be nerve-wracking, but approaching them as a conversation rather than a test will help you perform better.

Conclusion

Passing a cloud engineer interview requires more than just technical knowledge—it demands structured thinking, problem-solving skills, and clear communication. By preparing in advance and understanding key concepts like Bash scripting, API debugging, and cloud architecture, you can confidently approach your next interview.

Remember, the goal of a technical interview isn’t to trip you up but to evaluate how you think and solve problems. With the right preparation, you’ll not only pass but excel.

Good luck with your cloud engineering journey!

The post How to ace a cloud engineer interview: A comprehensive guide appeared first on RoboticsBiz.

]]>
How to master ‘Explain one of your projects’ during a technical interview

When preparing for an interview, especially in the tech domain, one question stands out as a crucial step in determining your future with the company: “Can you explain one of your projects?” This question is often the first technical query you will face after the typical “Tell me about yourself” introduction, and it plays a pivotal role in shaping the outcome of your interview. Whether you’re from the world of Generative AI, Machine Learning, or Data Analytics, how you answer this question can significantly influence your chances of advancing in the interview process.

Why “Explain One of Your Projects” is Crucial

The importance of this question cannot be overstated. Interviewers typically spend 5 to 7 seconds scanning your resume before deciding whether you should move forward in the interview process. Within the conversation’s first 15 to 20 minutes, they know whether you’re a good fit for the role. In this short window, the “explain one of your projects” question holds immense weight. This is where you can demonstrate your technical expertise, communication skills, and the relevance of your experience to the job you’re applying for.

The Art of Answering the “Explain One of Your Projects” Question

While it may seem simple, answering this question effectively requires a thoughtful approach. The ideal response should not be rushed into a brief 2-minute explanation, nor should it be so lengthy that it stretches to 30 minutes. A well-structured answer that lasts around 10 to 15 minutes is typically the sweet spot. Here’s how to approach this:

Start with Context and Relevance: Before diving into technical specifics, begin by providing context about the project. Acknowledge the industry or domain you were working in. Make it clear that you understand the company’s focus and, if possible, align your project with their objectives. If you know the company you’re interviewing with specializes in, say, Telecom or Generative AI, frame your explanation around a project you’ve done in that domain to demonstrate relevance.

Explain the Problem and Requirements: Next, describe the core challenge or problem the project aimed to solve. For instance, if your project involved Generative AI, you might say, “One of my recent projects focused on building a chatbot for an e-commerce platform where customers could query product information using natural language.” Ensure the problem statement is clear and relatable to the job description.

Detail the Solution: Now, shift focus to the solution you implemented. This is the meat of your explanation, so go into detail about the methods, technologies, and processes you used. Explain how you approached the problem and the steps you took to develop the solution. For example, if you were working with Generative AI, describe how you used RAG (Retrieval-Augmented Generation) or vector databases like Pinecone to enhance the chatbot’s ability to provide relevant answers based on real-time data.

For instance: “To address this, we implemented automated web scrapers that gathered product data from various sources, including PDFs and JSON formats. We then chunked this data into smaller segments, converted it into vector embeddings, and stored it in a vector database (Pinecone). The chatbot would then refer to this database when a user asked a question, such as ‘Can I have a laptop under 50,000?'”

Discuss the Technologies and Tools Used: This is where your technical expertise truly shines. If you’re familiar with specific tools, databases, or algorithms, highlight those and explain why they were the best choice for the project. For instance, if you used vector embeddings or discussed the intricacies of vector databases, this is the time to delve into it. If you’re confident in the process, share in-depth knowledge, such as different chunking techniques or how locality-sensitive hashing (LSH) works to speed up data retrieval in vector databases.
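
To ground the pipeline described above, here is a toy Python sketch of the chunk, embed, store, and retrieve loop. Everything in it is illustrative: the product data is invented, and embed() is a bag-of-characters stand-in for a real embedding model. A production build would swap in a model API and a vector database such as Pinecone, as described in the project example.

```python
# Toy sketch of the RAG retrieval flow: chunk documents, embed the
# chunks, index the vectors, and fetch the nearest chunks for a query.
import numpy as np

def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Fixed-size chunking with overlap -- the simplest chunking technique."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(texts: list[str]) -> np.ndarray:
    """Stand-in embedding: 128-dim character counts, NOT a real model."""
    vecs = np.zeros((len(texts), 128))
    for i, t in enumerate(texts):
        for ch in t.lower():
            vecs[i, ord(ch) % 128] += 1.0
    return vecs

docs = [  # invented product data for illustration
    "Laptop Aero 14: 16GB RAM, 512GB SSD, priced at 45,000 rupees.",
    "Laptop ProBook 16: 32GB RAM, 1TB SSD, priced at 82,000 rupees.",
]
chunks = [c for d in docs for c in chunk(d)]
index = embed(chunks)  # one vector row per chunk

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Cosine-similarity nearest neighbours over the chunk index."""
    q = embed([query])[0]
    sims = (index @ q) / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(-sims)[:top_k]]

print(retrieve("Can I have a laptop under 50,000?"))
```

Even in this toy form, the moving parts match the explanation above: chunking controls retrieval granularity, the embedding defines the similarity space, and the top-k nearest-neighbour search is exactly what a vector database (with tricks like locality-sensitive hashing) accelerates at scale.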

Highlight Your Role and Contributions: Don’t just talk about the project as a whole—highlight your specific role. Be clear about what tasks you were responsible for and how your contribution impacted the project’s success. Employers want to know what you brought to the table, so emphasize your direct contributions, whether in data preprocessing, model training, or deployment.

Emphasize Results and Impact: End your explanation by showcasing your work’s results and impact. Did the project lead to increased sales, better customer experience, or improved operational efficiency? Showing that your work had tangible results will leave a lasting impression on the interviewer. For example: “As a result, the chatbot increased customer engagement by 30% within the first month and reduced customer service load by 20%.”

Tips to Ace the Project Explanation

  • Research the Company’s Needs: Tailor your project explanation to match the company’s focus. If the company specializes in Legal AI and you’ve worked on a Generative AI project for product search, connect the dots between your experience and their requirements.
  • Be Confident But Honest: Focus on your strengths. If you’re confident in certain technical aspects, such as vector embeddings or cloud deployments, dive deeper into those topics. If you’re unfamiliar with other areas, avoid getting bogged down in them.
  • Engage with the Interviewer: In face-to-face or online interviews, ask if you can share your screen or use a whiteboard to explain the project. Visuals can enhance understanding and demonstrate your ability to communicate complex ideas effectively.
  • Prepare for Follow-up Questions: After presenting your project, the interviewer may have questions to dig deeper. Be prepared for questions like:
    • “Why did you choose Pinecone over other vector databases?”
    • “How did you optimize the performance of the chatbot?”
    • “Can you explain how you handled data preprocessing for this project?”
  • Keep the Focus on the Outcome: Interviewers are interested in how your work will contribute to their organization. Always explain how your project solved a problem or created value for the client or company.

Conclusion

Mastering the “Explain one of your projects” question is essential to acing your interview. By strategically framing your response, focusing on relevant experiences, and confidently explaining your role and the technologies used, you’ll increase your chances of standing out as a top candidate. Remember to tailor your examples to the company’s needs, focus on what you know best, and most importantly, showcase how your work can make an impact. With the proper preparation and mindset, you’ll be ready to confidently answer this key question and move forward in the interview process.

Intelligent automation in foodservice: Interview with Miso Robotics’ CEO Mike Bell

The foodservice industry has witnessed a major disruption recently in the wake of lockdowns and economic slowdown, triggered by an unprecedented global health crisis – the COVID-19 pandemic.

The industry came to a standstill amid partial and even permanent closures of countless food production facilities around the globe, restricted food trade policies, financial pressure across the food supply chain, movement restrictions on workers, and changes in consumer demand.

Estimates suggest that more than 15% of restaurants in the US closed permanently, while in Europe and the Middle East, permanent closures are estimated to be around 30% and even as high as 50%.

To overcome the ongoing challenges in the food supply chain, foodservice startups like Miso Robotics are stepping up to innovate and contribute in many ways to the sustainability of the restaurant business in the future.

Miso Robotics, for instance, is revolutionizing commercial foodservice with innovative Robots-as-a-Service and intelligent automation solutions that make restaurants safer, easier, and friendlier. We recently had a chat with Mike Bell, Chief Executive Officer of Miso Robotics, to know more about how intelligent automation, including AI and robotics, fundamentally impacts the end-to-end restaurant operational model.

Mike Bell sets the overall strategic direction for Miso Robotics and oversees the operations. A veteran technology executive and entrepreneur for more than 25 years, he has held C-suite roles at early-stage technology startups including Software.com, Encore, and, most recently, Ordermark, where he was COO. His expertise is in scaling emerging companies and building them into commercially viable, rapidly growing organizations.

You can read the complete interview below.

1. Automation has been an essential part of the food industry in the past few decades. The increasing demand for processed foods and the growing production volume of food products has significantly surged the need for automation in the industry. Yet, many thought that the so-called food robots and autonomous food platforms were things of the distant future. Unexpectedly, the COVID-19 has dramatically accelerated the timeline for advanced food robotics. Can you tell us about the ways how the pandemic has completely altered the landscape of food management in a push for contactless food prep and service?

Miso Robotics’ CEO Mike Bell

Over the past year and a half, the restaurant industry was forced into a new set of dynamics to stay operational. Restaurants are waking up to the potential to optimize, cut costs and unearth insights that drive new profits. The delivery, takeout, and drive-thru-first culture exploded, and with it came an increased demand for speed, quality, and consistency. That all starts in the back-of-house, where efficiency has historically relied on having a full staff with a strong execution skillset.

Unfortunately, the lack of labor has made efficiency impossible. Even before the pandemic hit, staff shortages were making operational success difficult – and now we are seeing an all-time high number of open positions and amplified turnover rates. Across the US, restaurants are operating with 2.8 fewer employees in the front-of-house and 6.2 fewer employees in the back-of-house – something that’s making meeting customer service expectations and revenue targets extremely challenging (Black Box Intelligence, 2021).

That’s where Miso Robotics comes in. We seek to offload some of the labor-intensive, dangerous, and repetitive tasks in the back-of-house, freeing up humans to focus their efforts on human tasks like customer service. We believe that our intelligent automation approach can bring operators the speed, quality, and consistency to succeed in this new landscape.

2. What are the key areas robotics is slowly becoming a critical component of today’s everyday food experiences? What are the key benefits that these robotic solutions bring to food managers?

It’s all about survival, and restaurant operators know it. They are turning to robotics and automation and asking us how to bring in technology to drive efficiency. The labor shortage has been one of the leading challenges for the industry, and it’s making a recovery nearly impossible for these businesses. While operators know that technology is the solution, it hasn’t quite become the standard for the industry – especially in the back-of-house. There aren’t many companies doing anything like what Miso is doing, and we are the only ones building for these challenges.

Scaling to meet the needs of the industry has been a long road, and you need the right team in place to develop the technology. At Miso, our goal is to get operators the solutions they need as quickly as possible. Not all operators are alike. Some need Flippy to handle the fry station, while others may just need CookRight to monitor the quality of their food. We want to give every operator the solutions they need to put them in a position to succeed and recover.

3. What are the significant challenges in the adoption of food robotics for various applications in the foodservice industry?

Every business has its operational challenges and unique needs, so any deployed food robotics will have to accommodate and understand those. Fortunately for Miso, our technology can be installed in any kitchen and ready to use within 24 hours without disrupting operations. Our product line tackles every aspect of food robotics applications in the commercial kitchen by providing future-focused solutions operators need to reduce costs and improve customer service for a better dining experience.

4. Tell us about Miso Robotics. How does it transform the foodservice industry with intelligent automation?

Miso Robotics brings AI-powered automation to commercial kitchens through robot-as-a-service (RaaS) and software-as-a-service (SaaS) offerings to address the biggest challenges for the foodservice industry today. Intelligent automation brings additional eyes and brains to the back-of-house, helping commercial kitchens increase production, optimize and track operations, and improve health and safety practices through new technology efficiency.

We currently have three core product lines – Flippy (RaaS, now with our new Wings variation), CookRight (SaaS), and an Automated Beverage Dispenser built in partnership with a global manufacturer, Lancer Worldwide. Each of these products tackles a specific need for the commercial kitchen and will help maximize operational efficiency.

5. Tell us about your new product line, Flippy Wings.

Flippy Wings is specially designed to help kitchen staff handle and cook chicken items more safely and efficiently by reducing the number of human touchpoints and dropping chicken items into the hot oil until they’re cooked to perfection. Since it reduces the number of touchpoints and can dispense and cook chicken items more precisely, Flippy Wings can also significantly reduce food waste in the kitchen.

So, the way it works is:

  • An automatic dispenser dispenses the exact amount of food that needs to be cooked based on the timing and size of the order
  • Flippy Wings’ AI and image recognition capabilities automatically identify the food item, so Flippy can program itself to cook a specific food item depending on its size and composition
  • A robotic arm picks the food bin, drops it in hot oil, and shakes it to remove any excess oil
  • The arm drops the bin into a hot holding area for a kitchen worker to pack and serve

Miso’s tests show a 10-20% overall increase in food production speeds when deploying the machine. With better rates of cook-time accuracy, Flippy Wings will allow brands to consistently deliver unmatched quality. So, with this new solution, kitchen staff can cook chicken items more safely and precisely while spending less time attending to the deep fryer.

6. What are the top biggest food technology trends right now, and how will they impact the restaurants and end-to-end food supply chain in the next five years?

One of the main trends that Miso is at the forefront of is the deployment of robotics and intelligent automation into QSR (quick-service restaurant) kitchens. As noted above, many of the roles in the back-of-house tend to be repetitive, painstaking, and, at times, dangerous (e.g., when in close proximity to the hot oil at the fry station), which, when paired with existing labor challenges, makes it nearly impossible for operators to fill. So, operators are turning to food technology rapidly to maximize efficiency and supplement the lack of labor. The implementation of robotics in QSRs will only continue to increase over the next five years, which will have a positive effect on the end-to-end food supply chain.

As a result of the pandemic, consumer demand for low-touch food options has also increased. Robots can limit the interaction between humans and the food items prepared to ensure consumers get their meals as safely and efficiently as possible. There’s no doubt that food technology is hot right now, and I can’t wait to see the advances made in the not-so-distant future.

Interview with Richard Russo, Jr., interim CEO and CFO of BIONIK

Leading robotics company BIONIK, focusing on providing rehabilitation and assistive technology solutions to people with neurological and mobility challenges, is in the news once again!

In September this year, the company reported that InMotion Connect™, its cloud-based data analytics solution that securely transmits and saves anonymized data from all connected InMotion robotic devices to BIONIK’s cloud server, had seen a significant increase in patient sessions across approximately 280 InMotion® robotic devices nationwide since the launch of the proprietary data platform last year.

The hospitals experienced a 58% increase in total session hours (time spent by patients utilizing InMotion® robots), a 47% increase in the total number of patients using InMotion® robots, a 65% increase in individual patient treatment time, and a 47% increase in the number of patient repetitions (movements).

We recently connected with Richard Russo, Jr., interim CEO and CFO of BIONIK, to learn more about BIONIK’s continued success in navigating technology adoption hurdles, and discuss the best practices around clinical success with new technologies in healthcare.

Before being appointed as interim CEO, Richard was brought into the company as CFO in November 2020. Richard joined BIONIK from ICarbonX, a privately held digital health management company specializing in artificial intelligence and health data, where he held the role of Vice President of Finance and U.S. Chief Financial Officer.

You can read the complete interview below:

1. BIONIK has been in the news recently because of the 72% increase in the patient use of its InMotion robotic devices nationwide. There has been significant growth for the last three quarters. Could you walk us through the key growth/performance indicators from Q4 2020 through Q2 of 2021?

It’s been an exciting time for BIONIK as the company’s robotic devices continue to be added to rehabilitation centers across the country. Our partnership with Kindred, which manages over 125 hospitals nationwide, has continued to give us the opportunity to install our InMotion robotic devices within their Inpatient Rehabilitation Facilities and to train hospital staff on how to utilize the technology. It’s with this partnership we have been able to identify an increase in patient sessions on our InMotion Connect cloud-based data analytics solution, as well as an increase in total session hours (time spent by patients utilizing the device), an increase in the total number of patients using the device, and an increase in patient repetitions (movements) when using an InMotion robot. These are major milestones for BIONIK, as these positive upward trends show how the company is enhancing technology adoption at our partner facilities.

We also continued our work with existing partners, including the VA Rehabilitation Research & Development funded Center for Neurorestoration and Neurotechnology, known as CfNN, who purchased an additional InMotion Arm/Hand Interactive Therapy System. These lasting partnerships allow us to continue working together to find ways to leverage neuroplasticity for faster recovery for those with neurological conditions or injuries.

2. Tell us about BIONIK’s innovative technology and how it continues to transform healthcare facilities and the quality of patient care.

How patients recover from a neurological injury is evolving thanks to the technology BIONIK has put forth based on research led by MIT. BIONIK’s InMotion robots provide effective, patient-adaptive therapy to restore upper-extremity motor control for various neurological conditions and recovery stages and acute stroke. Driving positive patient outcomes is at the core of what we do. BIONIK’s robotic devices provide clinical teams with the technology and robotic assistance to guide their patients to a path of recovery. With BIONIK’s focus on patient data, occupational therapists can better identify patient recovery milestones and areas of improvement to provide the utmost care possible.

3. Could you elaborate on the specific tasks that the InMotion robotic systems can guide the patients through, improving the motor control of their arms and hands?

InMotion Robotic systems efficiently deliver intensive motor therapy to help patients regain motor function following a neurological injury.  BIONIK’s devices help increase the quality and quantity of movements completed during a physical therapy session to build neuroplasticity. A conventional session between a therapist and patient will have about 30 to 60 movements in an hour. This hands-on approach can be exhausting for both the therapist and the patient. With the InMotion Robotic systems, we’re seeing 600 to 1,000 movements in an hour, guided by the therapist while the device assists the patient with each move. The positions and the speed of these repetitive movements are measured in real-time, allowing the therapist to see even the smallest improvement that may be invisible to the human eye. For the patient, being able to see these improvements is essential to motivating them to continue the journey to recovery.

4. BIONIK currently has approximately 280 InMotion robotic systems in use to help stroke patients. However, technology adoption in healthcare is often one of the biggest barriers to success within clinics. How do you navigate through the hurdles?

With our cloud platform InMotion Connect, we’re bridging the gap between clinicians, hospital management teams, and patients regarding technology adoption. Data is at the core of our products, and when we can quantify our data, we can provide useful analytics across the rehabilitation ecosystem. The InMotion Connect solution provides secure data collection from robotic devices in real-time, stores it in the cloud, and analyzes it. This allows institutions to review data daily to ensure asset utilization and performance for an optimal ROI. On the clinician side, BIONIK’s user-friendly dashboard reports on data collected from each patient session and provides timely insights to support a clinician’s decision-making and ultimately improve the care provided for each patient.

5. In your experience, what are the best practices around clinical success with new technologies in healthcare?

Clinical success is about communication and education when it comes to new technologies. At BIONIK, we’ve put a plan in place to effectively communicate with every partner hospital to ensure their clinical staff is set up for success. BIONIK’s team provides ongoing training sessions around the technology and new research to ensure the devices are continuously utilized to the fullest potential. We ensure every clinician is confident in using the device with their patients and can improve their efficiency and decision-making through ongoing follow-up and on-site coaching. BIONIK also monitors device data to help hospital management teams lead their teams to continuously incorporate the new technology into their sessions. Data and ongoing training are core components to ensure success, and BIONIK provides both.

Interview with Raffi Holzer, CEO of Avvir, about automation in construction

Automation and digitization are revolutionizing nearly every industry in the world, even old-school construction. Contractors and developers are leaning into this technology as it becomes more commonplace, and they see an increased need for enhanced productivity, quality, and safety due to continued urbanization.

As a result, the construction robot market is projected to reach $166.4M by 2023, up from $76.6M in 2018. From design through final inspection, robotics is transforming the way we build the country’s infrastructure, propelling construction into a new age. Utilizing these tools significantly alters many crucial tasks, including design translation, 3D printing structures, task repetition, progress tracking, and job site safety.

To learn more about the impact and the applications of automation and robotics in construction, we at RoboticsBiz connected with Raffi Holzer, co-founder and CEO of Avvir, a software platform company based in NYC that is reshaping the way project owners and contractors manage construction progress. Avvir employs computer vision and deep learning to provide automated construction verification and progress monitoring.

He was happy to answer our questions and walk us through more details, including use cases and where he sees the industry heading next.

You can read the complete interview below.

1. Architects, engineers, and all other construction process participants integrate advanced construction methods today due to the high complexity of the construction process. What roles do automation play to revolutionize various stages of the construction life-cycle? What are the key benefits?

Raffi Holzer, co-founder and CEO of Avvir.

Automation at its heart is about extending the productivity of human beings. Within construction, there are a plethora of dangerous, dirty, or simply repetitive tasks that machines or computers would be better suited to handling than human beings. Everything from tying rebar, to shoveling dirt, to hanging drywall could be improved at some level from automation, and there are startups active in each of those spaces today. The opportunity is massive. Construction productivity has stalled, and if we as a society are going to solve problems around things like affordable housing while overcoming a skilled labor shortage, robotics and automation will necessarily have a large part to play.

2. Robots in the construction sites take over the tasks of handling heavy loads, performing dirty or dangerous work, or working at hardly accessible locations and in unfavorable physical positions. Right now, they function as tools of the human being, but can they be developed as intelligent and autonomous tools? Or, are fully automatic systems suitable in construction?

Fully autonomous systems operating on a construction site are certainly a theoretical possibility. But we’re at least as far away from that reality as we are from truly autonomous vehicles. For now, I think the right focus is on automated systems that function as labor augmentation or enhancement. For one, those sorts of systems are simply more achievable in the short term. Second, they are simply less scary to most people worried about the loss of control, and even their job, so they are likely to be more quickly adopted. I think the analogy to vehicles is actually pretty instructive. You start with lane assist and intelligent cruise control. Eventually, you graduate to full autonomy.

3. What are the factors restraining the automation and robotics systems implementation in the construction sites? What are the key short and long-term challenges in the adaptation of robotics technology?

I think the key factors holding automation back in construction are twofold. First, people are scared of automation taking their jobs. The second is accounting. General contractors operate each project independently. Each needs to see an immediate return on investment on any innovation. Unfortunately, innovation doesn’t work that way. The kinks typically need to be worked out, and it often takes a couple of projects to prove out the returns. Because each project represents its own P&L, there is typically no single stakeholder for whom it makes sense to make that initial investment. This results in institutionalized short-term thinking.

4. Tell us about Avvir and how it solves the existing challenges.

Our technology is focused not on the automation of labor in the classic sense but on the automation of data gathering, analysis, and reporting. We compare reality capture data to building plans and schedules using a set of machine learning algorithms to automatically generate progress reports, earned value reports, and perform QA/QC checks. Avvir’s approach to overcoming the obstacles to adoption starts with acknowledging them. We recognize that we are not going to change the industry alone. We work to identify change agents within the industry and then act as enablers because that is what automation technology can do well: enable and empower people.

5. What are the technological advancements expected to happen in the next 5 to 10 years?

There’s a lot we can expect in the next 5-10 years, and a lot will happen that almost no one will see coming. I think we’ll certainly see project schedules automatically generated from designs and a dramatic decrease in the cost of reality capture. I think we’ll see more robots on-site performing several repetitive activities. We may see meaningful advancements in mixed reality. And there is a fair possibility that we’ll see cranes and other heavy machinery operating themselves.

Interview with Dan Harden, founder and CEO of Whipsaw

We at RoboticsBiz had the privilege of interviewing Dan Harden, who is the founder, CEO, and principal designer of Whipsaw, a highly acclaimed product design and experience innovation firm in Silicon Valley.

Dan is the highly active creative force behind Whipsaw and a luminary in the design world. Dan’s passion and experience, combined with his personal philosophies about art, culture, psychology, and technology, permeate the work and the brand.

Throughout his prolific career, Dan has designed hundreds of successful products ranging from baby bottles to supercomputers. He has won over 300 design awards and has been granted 500 patents. Fast Company selected Dan as one of the “100 Most Creative People in Business,” calling him “design’s secret weapon.”

Dan previously led design teams at other prominent firms, such as Frogdesign, Henry Dreyfuss Associates, and George Nelson. He has also designed many notable products for heavyweights like Steve Jobs, Larry Ellison, and Rupert Murdoch.

Dan Harden, founder, CEO, and principal designer of Whipsaw

Dan’s views and work have been featured in Axis, Business Week, CNN, Fast Company, Fortune, Metropolis, Newsweek, Time, Wired, several design books, and in museums such as the Smithsonian Cooper Hewitt Museum, The Henry Ford Museum, the Chicago Athenaeum, and the Victoria & Albert Museum. He is the host of Whipsaw’s design podcast Prism, and a host and judge on the primetime TV series America by Design.

In the interview, Dan mainly spoke about the design, trends, and future of robots in household environments. You can read the complete interview below:

1. Integrating robots into a household environment requires a different approach to robot design than industrial automation. What are the critical design criteria to follow in designing robots for everyday home environments?

The dream of owning your own household service robot that can do multiple complex chores, like Rosie the Robot Maid, has been a stretch goal for decades. For a robot to do the most basic tasks, such as pick up laundry, fetch a drink, or clean a countertop (without crashing into furniture, dropping valuables, spilling milk, or running over your dog) is a herculean engineering effort. The robot needs to know where it is at the moment, where and when it needs to go, how to identify objects, how to retrieve and manipulate those objects, and how to respond to people and obstacles. Developing a domestic robot requires nothing short of symphonic integration where many systems must work together simultaneously.

Bizzy and Martian Robots

A household robot must have a compact form factor in order to be acceptable in a home. It must be safe enough for small children, and it should look friendly and attractive. Robots will soon become consumer electronic products with typical mass production requirements and competition, so keeping the cost of goods very low compared to industrial robots is also critical.

2. Human-like or realistic appearance of robots often leads to a demand to display a behavior that matches the user’s expectations of the capabilities. Therefore, such designs can put pressure on engineers to develop these behaviors and interactions that can meet the user’s expectations. What is your approach to tackling the gap between design and performance?

You need to first decide what the bot should be doing (its function) based on end user needs, then determine how well it needs to do it (the performance). The look and behavior therefore need to follow these performance factors and fit them appropriately so that the overall feel of the bot is harmonious and natural. It’s also critical to get the behavior of the robot just right in order for it to be accepted by the user. If it’s not, it comes across as weird or stupid. Its behavior is so important that it should highly influence and sometimes even take precedence over performance decisions.

3. Tell us about the choice of soft materials and hardware, as designers, to enhance social robots’ expressiveness and effective feedback.

Material alone can sometimes carry a design, giving it extraordinary tactility, beauty, and quality. Material can often be the first identity of a product, the same way a paint’s application can define a painting. Material and design should be inseparable. Since robots are easily anthropomorphized, materials such as textiles and rubber play a big role in making them interesting and friendly. Softer materials can also help bots be less damaging on the furniture as they stroll about the house.

4. Human-inspired, pet-like, or alien… What should a robot look like? Tell us about the trends in the market.

A domestic robot will only be accepted by users if it looks right. In every robot project, we ask ourselves: Is it better to be functional, honest, and minimal, or have it be more expressive, verging on human-looking? Our opinion is usually to make it what it wants to be: a purposeful and efficient tool with self-explanatory design cues and details. We envision them as advanced appliances rather than pets.

KODA Robot Dog

However, as we start designing, we soon realize that it is sometimes difficult not to look like some type of creature. By the time you put cameras where they need to be, arms that can reach and lift, and hands to grasp objects, it starts to look a little animal-like. Our concepts sometimes tend to imitate nature’s evolution out of necessity. We embrace this logical consequence, and usually let these necessary elements define the robot’s identity. Enhancing these “creature effects” only helps to give robots more character.

5. Can you elaborate on the opportunities industrial designers have in today’s domestic service robot market?

The essential technology to make a domestic robot safe, useful, and affordable is almost here. Robots’ move into the home is imminent, and the home is likely to become their most prominent domain yet. Home robots are becoming one of the most exciting product categories for designers to work on because they are the ultimate synthesis of hardware, software, AI, and human-machine interaction.

They also possess nuance, which we designers love to craft. They have a peculiar mix of movement, mannerism, and countenance, and these preternatural subtleties present a difficult and therefore worthwhile design problem. To infuse an artifact with a behavior that has been crafted and designed is also simply fascinating. The industrial design profession should be excited: it has never had a better quest or a more interesting subject than the domestic robot.

6. How can industrial designers support robotics scientists in their research activities?

Industrial designers are great at synthesizing scientific research and data into products that appeal to people: making robots relevant to people’s needs, making them fit within the home environment, making them easy to operate, and, of course, making them attractive. Designers are also great at establishing clear objectives for what a robot should be doing in the first place, so that consumers ultimately say, “Wow, I need that because I have that problem, and the bot can help me solve it.”

Designers are always thinking about the human condition, which in almost every case is needed to complement the science. Industrial designers also create the necessary specifications for mass production, including all Computer-Aided Design (CAD) files, finish specifications, material specifications, and production processes.

3D cameras revolutionize companion robots – David Chen of Orbbec [Interview]

Imagine a friend that can diagnose your illnesses, play with your children, and follow you around to provide a cold drink whenever you need it. These kinds of friends exist – they just aren’t humans.

From Heineken’s beer dispensing droid to the AI-enabled physician’s helper, companion robots seamlessly fulfill specific human needs and functions. How? 3D cameras.

David Chen, director of engineering at Orbbec, says 3D cameras are what allow these digital sidekicks to analyze, navigate and monitor both their environment and the humans around them.

In an exclusive interview with RoboticsBiz, David Chen predicted that while companion robots currently play a specific and important role in assisting the elderly and those with disabilities, we are not far off from robots taking on multifunctional roles in most households. Chen believes that advancements in 3D cameras will continue to enhance the capabilities of companion robots.

Read the complete interview below:

1. 3D imaging is one of the hot topics in robotics today. Unlike 2D, 3D vision allows a robot to detect the orientation and recognize an object with complex geometries and reflective properties in low light conditions. Can you give us an idea about the new breakthroughs/advancements happening in 3D sensing technology today?

Depth-sensing solutions have already been widely used in robotics technology, but typically people are referencing LiDAR technology. This is a point or line-based technology which has been used for some time combined with RGB/2D vision systems. This combination provides robots with the measurements needed to recognize and navigate their environment.

The downside of LiDAR has always been the performance-cost ratio. Full-field, high-performance LiDAR usually carries a high price, while low-cost LiDARs are usually only capable of single-point (scanning) measurement.

What we are seeing right now is 3D cameras gradually taking the place of conventional RGB/2D cameras and LiDAR combination models. 3D cameras provide a much more detailed and reliable vision system with the ability to gather three-dimensional full-field information, which robots can then use to analyze with a high level of accuracy. The imaging ability of 3D cameras is constantly improving – with the ability to detect and analyze further distances and capture accurate images with fewer concerns for environmental conditions. On top of that, 3D cameras are more competitive in cost while providing similar functions to the current RGB/2D camera and LiDAR combination.

Now for the robotics industry, there are specific 3D advancements being made that are very exciting. One of the most important is deep learning-based 3D reconstruction. Unlike conventional 3D reconstruction methods, deep learning-based 3D reconstruction can provide very detailed results with an acceptable accuracy rate, while conventional 3D reconstruction provides higher accuracy but less detailed 3D data. Robotics developers urgently need such detailed 3D data for tracking and recognition.
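
Concretely, the jump from a 2D image to full-field 3D data comes down to deprojecting each pixel’s depth reading through the camera optics into a point cloud. Below is a minimal Python sketch using the standard pinhole camera model; the frame size and intrinsics are hypothetical placeholders, not values from any Orbbec product.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        # Deproject a depth image (in meters) into an N x 3 point cloud
        # using the standard pinhole camera model.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no depth reading

    # Hypothetical 640x480 frame and intrinsics, for illustration only.
    depth = np.random.uniform(0.2, 5.0, (480, 640))
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)  # (N, 3): a full-field set of 3D points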

2. In May, Orbbec announced a collaboration with Microsoft to introduce new cameras in 2022 to satisfy the burgeoning demand for 3D sensing technology. Can you tell us about this collaboration?

This is a very exciting time at Orbbec as we begin a great partnership with Microsoft. We are collaborating with them to develop a new series of high-performance 3D cameras for developers and solution providers worldwide.

Combining Microsoft’s market-leading 3D sensor technology with Orbbec’s manufacturing and unique embedded computing design capabilities, we will introduce new ToF cameras next year. The cameras will securely connect to the Microsoft Azure cloud platform and leverage device management, data streaming, and AI analytics capabilities; they can run advanced depth vision algorithms and use onboard computing to convert raw data into precise depth images. They are designed for advanced human/machine interfaces, robotics, 3D scanning, and surveillance use, as well as gaming and other consumer applications.

3. What role does Orbbec play in developing companion and service robots, especially in human-robotic interaction?

Orbbec’s main task in this industry is to provide early-stage imaging for robotic vision systems. We are creating scenario-oriented cameras that are specifically designed to help a robot complete its programmed task. What makes our solutions unique is that we can also incorporate embedded computing solutions in our camera hardware, such as single-board computers, to help the customer build a customized vision system.

We are currently working on a series of accessories that can accompany these systems. For example, precisely manufactured optical frames/mounts will help the robot builder or manufacturer set up the vision system more easily. And, we are also able to provide some manufacturing services to those in the robot companion and service industries – especially for smaller companies lacking supply chain capabilities.

4. Can you tell us about your new Time-of-Flight (ToF) camera product line? 

Time of Flight (ToF) depth technology has evolved over the last two decades and is commonly being used for SLAM, navigation, inspection, tracking, object identification, and obstacle avoidance. It is a powerful vision technology for assisting in the evolution of the robotics industry.
Earlier this year, we introduced a new ToF sensor, Femto, that can scan objects with high accuracy at a depth-of-field ranging from 0.2 to 5 meters. Its state-of-the-art depth algorithm makes it suitable for all kinds of applications and extends the use of 3D imaging to a wide range of environmental conditions, such as high-temperature environments and complete darkness.

Orbbec is unique in the marketplace for its ability to deliver ToF cameras with embedded computing capability. Unlike many rival ToF cameras, Orbbec Femto can output high-quality depth data without other external computing capabilities, giving users significantly greater design and application flexibility.
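
For background, ToF sensing computes distance from the travel time of emitted light, either directly or via the phase shift of a modulated signal. Femto’s internal pipeline isn’t described here, so the sketch below only illustrates the standard ToF arithmetic; the 20 MHz modulation frequency is an assumed example, not a Femto specification.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def tof_distance_direct(round_trip_s):
        # Direct ToF: distance is half the round trip at the speed of light.
        return C * round_trip_s / 2.0

    def tof_distance_phase(phase_rad, mod_freq_hz):
        # Continuous-wave ToF infers distance from the phase shift of a
        # modulated signal: d = c * phase / (4 * pi * f),
        # unambiguous up to c / (2 * f).
        return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

    # An assumed 20 MHz modulation gives an unambiguous range of
    # c / (2 * 20e6) ~= 7.5 m, comfortably covering a 0.2-5 m
    # depth-of-field like the one quoted above.
    print(tof_distance_phase(math.pi / 2, 20e6))  # ~1.87 m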

5. Do you have any predictions about the future of 3D sensing technology? What will the future look like in the next five years? 

In five years, 30 percent of RGB/2D cameras will be replaced with 3D cameras. 3D depth cameras provide one more dimension of image detail for robots to “see,” which is particularly critical for the industry. Additionally, 3D systems address growing privacy concerns. Anyone can read or view a 2D image, but 3D imaging relies on point clouds, which means personally identifiable images are no longer required. 3D vision systems therefore provide an extra layer of privacy that people will be looking for in their imaging solutions.

As more depth cameras begin to replace traditional RGB cameras, the cost and barrier for advanced technology solutions, like companion or service robots, will drop. This gives more industries, at every level, the opportunity to incorporate these solutions into their business.

Interview with John Suit, advising CTO at robotic dog company KODA

Robots are already a familiar sight in our factories and warehouses, where they continue to gain ground. They typically operate in highly structured environments and have only limited interactions with humans.

Now, with an accumulation of innovations in artificial intelligence (AI), sensors, and battery technology, robots have acquired the ability to enter our workplaces and homes. These social robots are designed to interact with us and exhibit social behaviors such as recognizing, following, and assisting their owners, and engaging in conversation.

These social robots trigger various discussions, ranging from the dark side of artificial intelligence to the future of work and the impact on social interactions. One of the most interesting ones is whether there will be a recognizable border between humans and robots.

To further discuss the opportunities offered by social robots and the potential impact on our social life, we are glad to introduce John Suit, who is working as the advising chief technology officer at robotic dog company KODA.

We had a chance to interview John Suit, currently responsible for innovation and the technical direction of KODA, Inc., as well as Cyber Reliant. Cyber Reliant is the leading provider of next-generation cryptographic and dynamic content provisioning technologies for organizations that require the most advanced content assurance and secure mobile communications.

John Suit, advising CTO at robotic dog company KODA

Prior to Cyber Reliant, John was most recently the CTO of Xceedium Inc., which was acquired by CA Technologies. Prior to Xceedium, John co-founded Nano Network Engines, which became Fortisphere. Before Fortisphere, John was the VP of Advanced Technology for Cloakware. John holds over 40 U.S. and International Patents in Information Security Analysis and Securing Cloud Environments, with several additional patents pending.

You can read the complete interview below:

1. Over the last several years, there has been increasing interest in Human-Robot Interaction (HRI) due to the rising usage of robots in industrial fields and in other areas such as schools, homes, hospitals, and rehabilitation centers. There is therefore a growing need to develop behavioral models for social robots that support high-quality interaction and acceptability in providing useful and efficient services. However, it is also true that how people will accept, perceive, interact, and cooperate with these machines as social beings in their lives is still somewhat unknown. What is the current social perception of such socially skillful, artificially intelligent machines? Will robots be the companions of the future?

Robots with human characteristics and robots that are meant to emulate “cute and cuddly” animals will have very different paths toward acceptance; this mirrors the different ways humans meet other humans versus a visibly friendly animal. When we meet a stranger, there is often a period of time in which trust needs to be earned and a relationship needs to be established. This trial period is extended when a human comes into contact with a humanoid robot.

When a dog approaches someone, playfully wagging its tail, we’re more likely to let our guard down and play along. I believe that robotic animals, designed to do everything a mobile device can do, in addition to socializing with humans, providing physical guidance, companionship, and even protection, will be more quickly accepted by a new owner or user.

Sci-fi has given us Skynet and replicants, with the caveat that this is all fantasy. But when that fictional future looks closer to reality, there is a collective concern we start to feel. Before we’ve even come into contact with a humanoid robot, we’re already coming from a place of fear. When these robots become more commonplace, the trust that they will need to overcome far exceeds the trust currently necessary for humans to form relationships with strangers.

2. To achieve fluent and effective human-like communication, social robots must be endowed with the capability to understand the feelings, intentions, and beliefs of the user, which are not only directly expressed by the user but are also conveyed through bodily cues (i.e., gaze, posture, facial expressions) and vocal cues (i.e., vocal tones and expressions). Could you talk about the key advancements in empowering robots with cognitive and affective capabilities to establish empathetic relationships with users?

There have been tremendous advancements in these capabilities. They evolved out of facial recognition systems, initially intended to identify an individual and attempt to determine ill intent or nefarious purposes on the part of the individual being observed. Mood systems evolved from multi-camera and temperature sensors coupled with algorithms, and they have made it all the way to social media mood displays for entertainment on our handheld devices.

3. What are the challenges social robotics companies are yet to solve?

Social robot companies need to convey value beyond a single or very small set of use cases. The benefit of real AI, especially in decentralized-AI learning robots, is that the robot can be used for almost anything that benefits from observation, guidance, memory, instruction, physical assistance, and, most importantly, reasoning and recall. For example, a KODA robotic dog may one day be acquired to aid a visually impaired child with all their day-to-day needs, as an organic service animal would, but a KODA that includes real learning through its decentralized AI systems will also detect when the child is under distress, scared, or otherwise in need of assistance.

Additionally, the robot will gather the data surrounding the child as it navigates environments from large cities to rural locations. If help is needed, the robot can communicate through a varying array of onboard devices to get assistance or information, or to enable two-way or group communication, all while recording everything that is happening, if desired. The challenge of conveying the value beyond the single-use case will be key.

4. Tell us about KODA.

KODA is a social robot. It’s designed to be functional from both pragmatic and emotional perspectives. KODA’s blockchain-enabled decentralized AI infrastructure allows the robot dog to serve a multitude of purposes: from family companion to seeing-eye dog, from ever-vigilant guard dog to a powerful supercomputer capable of helping science solve some of its most complex problems. The learning power of KODA makes it future-proof in both its evolution and application.

5. Is it true that KODA’s robotic dog can evolve from a puppy-like state to a robotic dog with the intelligence of a supercomputer? Can you explain how?

KODA’s potential is rooted in its ability to utilize its decentralized AI to learn and grow. The dogs are shipped with domain knowledge – a base level understanding. And when they are unboxed, they have the ability to add to this domain knowledge as they interact and observe the environment around their owner.

robotic dog company KODA

It’s not that different from how humans develop wisdom. It extends beyond just learning and figuring out how to do something; it’s understanding how to take those findings and put them into practice.

The best example I can think of is that as you progress through your life, you are presented with challenges that you attempt to overcome or ignore. As you overcome challenges, you gain knowledge. You usually have to overcome similar challenges more than once, and typically several times. Ephemeral memory loss ensures you will develop multiple ways to come at a new challenge, with the most optimal solution learned over time. If you ignore a challenge, the challenge tends to be harder the next time you face it.

Eventually, as you tackle multiple challenges and gain knowledge on how to handle them with ease, you develop wisdom.

If a KODA learns how to climb a set of stairs, it may be beneficial for its instruction to employ ephemeral memory loss, so it will forget it learned how and will attempt it again and again, starting fresh to reason how to climb the stairs, while retaining the essence of having learned how to do things with different environmental conditions, interactions, and unaccounted for variables. It should employ different tactics with different attempts. If the user’s young son left his toy on the steps, it might learn something entirely new for just that session. It’s this level of learning that is the next major breakthrough in robotics, and I believe KODA is the ideal platform for it.
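
As a toy interpretation of the stair-climbing example above (an illustrative sketch, not KODA’s actual architecture), “ephemeral memory loss” can be modeled as an agent that discards each concrete plan after an attempt while retaining distilled statistics that inform the next fresh attempt:

    import random

    # Toy sketch only: the agent forgets each concrete stair-climbing plan
    # (ephemeral memory loss) but folds every attempt into a retained
    # "essence" that makes future fresh attempts better informed.
    class StairClimber:
        def __init__(self):
            self.essence = {"avg_steps": None, "attempts": 0}

        def attempt(self, n_stairs):
            # Re-derive a plan from scratch, guided only by the essence.
            hint = self.essence["avg_steps"] or n_stairs * 2
            steps = max(n_stairs, round(random.gauss(hint, 2)))
            plan = ["step"] * steps  # the concrete, forgettable solution
            # Distill the outcome into the essence...
            a = self.essence["attempts"]
            prev = self.essence["avg_steps"] or steps
            self.essence = {"avg_steps": (prev * a + steps) / (a + 1),
                            "attempts": a + 1}
            # ...then forget the concrete plan entirely.
            del plan
            return steps

    bot = StairClimber()
    for trial in range(5):
        print("attempt", trial, "took", bot.attempt(n_stairs=10), "steps")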

6. How is it different from other robotic dogs in the market?

There is no other robotic dog available with the computing power, decentralized AI, and potential for real learning across unlimited use cases, nor, in my opinion, one as cute as a KODA.

Interview with JetPlay’s CEO Tom Pigott about AI and game creation

With the exponential growth of the game industry over the past decade (especially since the global pandemic) and the increased complexity of games, game development has reached a tipping point where it is no longer humanly possible to use only manual techniques to create games.

Today, a large part of games is designed, built, and tested automatically. In recent years, researchers have delved into better artificial intelligence (AI) techniques to support, assist, and even drive game development.

To further discuss AI’s role in game creation, we interviewed Tom Pigott, a seasoned tech entrepreneur and CEO of JetPlay, which recently released the open beta of its AI games creativity platform Ludo – the world’s first AI games ideation tool that can accelerate and democratize games creation.

Tom graduated from Stanford University in 1991. He worked in Japan and China for Caterpillar-Mitsubishi in the 1990s before returning to Seattle, just as the Internet was taking off. Tom started his first company, Soma.com, in 1997, which was the world’s first Internet pharmacy, and successfully sold it to CVS drugstores several years later.

Tom has been active in angel investing as well as venture capital. His passion is starting companies, though, and he created his next company, Candela Hotels, as a leading high-touch, high-tech luxury hotel concept. The 2007 financial crisis put the build-out of those hotels on the shelf. Tom went back to investing in technology start-ups, including VR gaming. JetPlay is his latest breakthrough company, which plans to empower the world’s game creators through the magic of AI.

You can read the complete interview below:

JetPlay’s CEO Tom Pigott

1. Traditionally, artificial intelligence was always left until the last stage in a game’s development cycle. Most programmers know that it is almost impossible to develop game AI until one knows how the game will be played. But today, it has become a central part of all advanced game design, from creating advanced characters to giving them their own independent, autonomous personalities that players interact with. Can you tell us how AI is revolutionizing the gaming industry today?

AI and machine learning are going to become more and more embedded in the design process for game creators. They will free creators from much of the drudgery in game creation, and less time will need to be spent in brainstorming sessions, scanning social media, and testing prototypes to come up with a hit game. AI tools will put all of the ideas at your fingertips and assist with initial concepting, thus allowing game creators to focus on the actual execution of the game design.

2. Players are more diverse, have access to games in more places and at more times, and produce more data and content for developers to leverage than ever before. What are the key challenges game developers face today, and how does AI play a significant role in delivering an engaging, real-time experience to players?

Game developers face constant pressure and competition to come up with new winning game concepts or improved versions of existing ones. AI will significantly enhance and turbocharge the design process for creators by allowing them instant access to over a million games and the ability to blend these games together, creating something entirely new.

3. Tell us about your new AI platform Ludo.

Ludo, Latin for ‘I play,’ is the world’s first AI games ideation tool that uses machine learning and natural language processing to develop game concepts 24 hours a day. Ludo is constantly learning and evolving, and it makes game concepting and ideation simple. Just by entering keywords or sentences, Ludo accesses its database of close to a million games and rapidly returns written game concepts, artwork, and images that developers can take to the next stage (concept presentation, MVP, or accelerated soft launch).

Ludo will change game creation by enabling developers, arming them with unique game concepts within minutes of their request being processed. Furthermore, as Ludo’s powerful capabilities are within reach of studios of any size, the creation process is being democratized.

4. The style of programming in a game is still very different from that in any other sphere of development. There is a focus on speed, but it isn’t very similar to programming for embedded or control applications. There is a focus on clever algorithms. Tell us how Ludo uses machine learning algorithms and natural language processing to develop game concepts 24 hours a day.

Ludo relies on Natural Language Processing (NLP) Transformer models to understand existing games and generate new ones. Ludo’s models are trained on games found in different sources, of all possible genres, and across a wide variety of platforms. This allows Ludo’s models to fundamentally understand what each game is about and how different game elements can be used to build a game, and to produce new game descriptions leveraging those associations.

The game generations produced by Ludo tend to have a high degree of creativity and surprising combinations of game elements while still being perfectly readable and understandable, making Ludo an excellent brainstorming partner. And since there is random sampling involved in the generation processes, the content generated by Ludo will always be different and is virtually unlimited, even if the users give Ludo the same inputs.
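
Ludo’s models and training corpus are proprietary, but the general technique described here, keyword-conditioned text generation with random sampling, can be sketched with an off-the-shelf transformer. The model choice, prompt format, and keywords below are illustrative assumptions, not Ludo’s implementation:

    # Illustrative only: a generic keyword-conditioned generation loop
    # using an off-the-shelf model, not Ludo's proprietary system.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    keywords = ["pirates", "puzzle", "time travel"]
    prompt = f"Game concept ({', '.join(keywords)}): "

    # do_sample=True introduces the random sampling described above, so
    # identical inputs still yield different concepts on each call.
    ideas = generator(prompt, max_length=60, num_return_sequences=3,
                      do_sample=True, top_p=0.9)
    for idea in ideas:
        print(idea["generated_text"], "\n---")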

Ludo combines a number of different NLP models to provide a rich experience in game generation, including:

  • Generation of games based on the genres and/or platforms specified by the user.
  • Generation of games that revolve around keywords given by the user.
  • Auto-completion of game descriptions, starting from one sentence or short text given by the user.
  • Generation of game features and game mechanics based on any given game or short game description.
  • Generation of new games that are similar to any given game.
  • Generation of new games by mixing and blending any number of other existing games.

By leveraging state-of-the-art Natural Language Understanding (NLU) and Computer Vision (CV) models, Ludo provides tools that enable semantic search of games and game media (a minimal sketch of this kind of search follows the list), allowing, for instance:

  • Finding games via a search query describing the game theme, features, and/or mechanics.
  • Finding games similar to any given game (existing or generated).
  • Finding game images via a search query.
  • Finding game images semantically similar to any given image (that is, images that have similar elements or similar meaning).
  • Finding images related to any given game.
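
Here is a minimal sketch of how such semantic search typically works, using sentence embeddings and cosine similarity; the library, model name, and toy game catalog are assumptions for illustration, not Ludo’s actual stack:

    # Illustrative only: embedding-based semantic search over game
    # descriptions with the open-source sentence-transformers library.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    games = [
        "A cozy farming sim where you restore a haunted orchard.",
        "A fast-paced roguelike about looting procedural dungeons.",
        "A physics puzzler about stacking wobbly towers of sea creatures.",
    ]
    game_embeddings = model.encode(games, convert_to_tensor=True)

    query = "relaxing game about growing plants"
    query_embedding = model.encode(query, convert_to_tensor=True)

    # Cosine similarity ranks by meaning rather than keyword overlap,
    # so "growing plants" matches the farming sim.
    scores = util.cos_sim(query_embedding, game_embeddings)[0]
    best = int(scores.argmax())
    print(games[best], float(scores[best]))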

5. Where is the gaming industry headed? Can you tell us about the current trends in gaming?

Creativity is the new currency. The future will see us scaling our creativity through the power of AI, and in a few years, it will be hard to imagine game designers not using AI tools as a fundamental part of their design process. Digital worlds like versions of the Metaverse will be created by many different actors in the next five to ten years, and people will be spending more time socializing and interacting with digital friends in the future, a trend we have already seen developing during Covid-19.
