This episode of Hidden Brain discusses the impact of errors, opening with the example of a minor typo that caused the downfall of an engineering company. The consequences of errors, ranging from trivial to catastrophic, are explored, along with the idea that demanding perfection may itself be a mistake. The episode features Amy Edmondson, a Harvard Business School scholar, who shares her research on medical mistakes and team effectiveness. Her findings reveal a counterintuitive correlation between high-performing teams and higher reported error rates, leading to her insight about the importance of psychological safety in teams. The episode concludes with a discussion of the need for an open interpersonal climate for learning from mistakes, and a shift in perspective that sees errors as opportunities for learning rather than disasters.
How does it apply to you?
The lessons from this episode can be applied in various real-world scenarios, from improving team dynamics in a corporate setting to enhancing safety measures in high-stakes environments like hospitals and airlines. Understanding the importance of admitting and learning from mistakes can foster a culture of transparency and continuous improvement.
Applied Learning to Developer Enablement
The episode of Hidden Brain and Amy Edmondson's research can be applied to software development and learning in a software development organization in several ways.
Emphasizing Psychological Safety
Edmondson's findings on the importance of psychological safety are directly applicable to a software development environment. By creating a safe space where team members can admit and learn from their mistakes without fear of punishment, teams can improve their performance and build better software.
Learning from Mistakes
Edmondson's reframing of errors and failures as opportunities for learning rather than signals of disaster or dysfunction can be applied to software development. It's common for developers to make mistakes, but these should be seen as opportunities to learn and grow rather than reasons for punishment.
Setting Up Systems to Catch and Correct Mistakes
Edmondson's suggestion to set up systems to catch and correct mistakes before they cause harm is applicable to software development. Implementing practices such as code reviews, automated testing, and continuous integration can help catch errors early and prevent them from causing major issues.
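The point about catching errors before they cause harm can be made concrete with automated testing. Below is a minimal, hypothetical sketch: the function name and scenario are invented for illustration, but assert-based tests of this shape, run automatically on every commit in a continuous integration pipeline, surface mistakes long before they reach users.

```python
# Hypothetical example: a small automated test that catches a boundary
# error early. The function and values are illustrative, not from the
# episode.

def apply_discount(price: float, percent: float) -> float:
    """Return price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests like these act as the "system that catches mistakes":
assert apply_discount(100.0, 25) == 75.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(50.0, 150)  # invalid input should be rejected
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range percent")
```

Run in a CI pipeline, a failing assertion stops the change from being merged, which is exactly the "catch and correct before harm" pattern Edmondson describes.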
Context-Dependent Attitudes Towards Failure
In software development, attitudes towards failure can be context-dependent, similar to Edmondson's findings. For example, in a research and development environment, a 'fail fast, fail early' approach can be beneficial. However, in a production environment where errors can have significant consequences, a more cautious approach is needed.
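One common way to reconcile these two attitudes is to limit the blast radius of an experiment, so a team can fail fast against a small slice of traffic even close to production. Below is a minimal sketch of a percentage-based rollout gate; the helper name and bucketing scheme are assumptions for illustration, not a specific tool from the episode.

```python
# Hypothetical sketch: deterministically bucket users so that a risky
# change is exposed to only a small percentage of traffic.

import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Return True if user_id falls in the first `percent` percent of buckets."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < percent

def handle_request(user_id: str) -> str:
    # Enable the experimental code path for roughly 5% of users, so any
    # failure is contained to a small, recoverable slice.
    if in_rollout(user_id, 5):
        return "new-path"   # experimental behavior
    return "stable-path"    # proven behavior for everyone else
```

Hashing the user ID makes the bucketing stable: the same user always lands in the same bucket, so the experiment's audience does not churn between requests.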
Developer Checklist
Understand the Impact of Errors: Acknowledge that mistakes can have varying consequences, some trivial and some catastrophic. Use this understanding to create a culture that allows for mistakes and learns from them rather than punishing them.
Research on Errors: Consider conducting research on mistakes and failures in your field. This could include analyzing previous errors, their causes, and effects to better understand how to prevent them in the future.
Measure Team Effectiveness: Use tools such as the Team Diagnostic Survey to assess team effectiveness. This can help identify areas of strength and weakness within the team.
Promote Psychological Safety: Encourage a safe environment where team members can admit and learn from their mistakes. High-performing teams often report more mistakes precisely because of this openness.
Re-evaluate Perception of Mistakes: Shift the perspective of viewing errors as disasters to viewing them as opportunities for learning and growth. This shift can help foster a more positive and productive work environment.
Understand the Consequences of Mistakes: Recognize that the consequences of mistakes can vary greatly in severity. Use this understanding to create appropriate responses and solutions.
Rethink Failure: Rather than trying to eliminate all errors, set up systems to catch and correct mistakes before they cause harm. This approach promotes learning and improvement.
Adapt Attitudes Towards Failure: Understand that attitudes towards failure are context-dependent. While 'fail fast, fail early' may work in some settings, it may not be appropriate in others where the stakes for error are high.
Embrace Intelligent Failures: Acknowledge and learn from mistakes made during the exploration of new territories or application of new methods. These failures are opportunities for learning and improvement.
Understand Complex Failures: Recognize that complex failures often result from a combination of minor issues. Develop an understanding of how these minor issues can align to create significant problems.
Encourage Speaking Up: Foster an environment where team members feel comfortable voicing their concerns about potential issues. This can help catch and correct issues early, preventing complex failures.
Implement Proactive Error Management: Adopt proactive error management systems, such as Toyota's 'Andon cord', to encourage early detection and correction of errors.
Understand the Value of Quality over Cost: Recognize that investing time and resources in identifying and rectifying small problems can lead to higher quality output and prevent complex failures in the long run.
Beware of Overconfidence: Stay vigilant and mindful even when carrying out familiar tasks, as overconfidence can lead to basic errors and failures.
Use Checklists Mindfully: Implement the use of checklists to prevent basic errors, but ensure they are used mindfully and not just as a formality.
Promote Intelligent Failure: Encourage a culture where intelligent failures are viewed as learning opportunities rather than stigmatized. This can lead to fewer basic and complex failures.
Understand Intelligent Failures: Learn to differentiate between various types of mistakes and embrace the concept of 'intelligent failures' as they provide valuable information for future attempts.
Embrace Failure as Part of the Process: Recognize that failure is a part of the scientific process and each failed hypothesis provides crucial information about what doesn't work, enabling the team to adjust their approach and try again.
Approach Failure as a Process of Discovery: See failure as an opportunity for exploration and discovery, akin to finding the door handle in the dark.
Apply Intelligent Failure in Different Contexts: Apply the concept of intelligent failure in various situations, such as blind dating, where each failed date can inform future decisions.
Minimize the Scope of Failure: Limit the investment in an uncertain outcome to gather necessary information for future decisions. This can be done by training in simulators before applying skills in real-life situations.
Practice Intelligent Failure in R&D: Conduct experiments with the understanding that many may not work out as expected. These failures are part of the process of discovery and innovation.
Learn from Successful Practitioners of Intelligent Failure: Draw inspiration from successful examples of intelligent failure, such as Thomas Edison, who viewed his unsuccessful experiments as ways that didn't work.
Do Your Homework: Prepare thoroughly before embarking on a new venture or experiment. Use all available resources and knowledge to understand the context and gather as much information as possible.
Understand Characteristics of Intelligent Failures: Recognize that intelligent failures involve calculated risk-taking, hard work, resilience, and viewing failure as a necessary part of progress and learning.
Reframe Setbacks and Failures: Learn to see setbacks as necessary and part of being human, and appreciate their role in accomplishing great things.
Seek Out Intelligent Failures: Embrace a framework that views failures as opportunities for learning and seek out intelligent failures more often.
Embrace Failures: View failures as learning opportunities and welcome them instead of avoiding them.
Identify Learning Objectives from Real-life Stories: Use real-life examples, like Susan Prescott's story, to identify potential learning objectives such as overcoming fear of public speaking.
Leverage Personal Experiences for Learning: Understand and leverage personal experiences and challenges as a learning tool in software development.
Apply Learning to Real-world Scenarios: Apply the learning from personal experiences to real-world scenarios, such as transitioning into roles that require the learned skills.
Share Knowledge and Learnings: Encourage sharing of knowledge and experiences to enhance collective learning.
Continuous Learning and Improvement: Always strive for continuous learning and improvement in your software development journey, just as Susan Prescott continually improved her public speaking.
Key Points
The Consequences of Errors - The host explores the idea that errors have varying consequences. Some are trivial, and others can have catastrophic effects. The story of Taylor and Sons serves as an example of a minor error leading to a major disaster.
Amy Edmondson's Research on Medical Mistakes - Amy Edmondson has conducted research on mistakes and failures, specifically in the medical field. The aim of her research was to understand the causes of these errors and their potential for harm.
Understanding Team Effectiveness - Edmondson discusses her methods for measuring team effectiveness, including assessing factors such as interpersonal relationships, performance, resource availability, leadership quality, and job satisfaction
Correlation Between Team Quality and Mistakes - Contrary to her initial hypothesis that better teamwork would lead to fewer mistakes, Edmondson found that high-performing teams were reporting more mistakes. This led to her insight about the importance of psychological safety in teams.
Introduction to the Study and Initial Expectations - Edmondson conducted a study on the correlation between team coordination quality and the incidence of adverse events or mistakes in a healthcare setting. The premise was that better teams would have fewer adverse events.
Unexpected Findings - The data showed a correlation between good teamwork and high error rates, contrary to her initial hypothesis. This led to the question of whether better teams actually make more mistakes, or whether they are more willing and able to report them.
Exploring the Possibility of Greater Error Reporting - Edmondson found a correlation between the team's climate and the willingness to report errors. When respondents agreed that making a mistake would not be held against them, reported error rates were higher.
Field Observations - Significant differences were found in the work environments of the units. In open units, staff members felt comfortable discussing mistakes, while in authoritarian units, staff members feared the consequences of admitting mistakes.
Reframing the Perception of Mistakes - Edmondson concluded that unreported mistakes cannot be learned from, emphasizing the importance of an open interpersonal climate for learning from mistakes and failures
Understanding the Consequences of Mistakes - The consequences of mistakes can vary greatly. However, the common thread is that mistakes are generally seen as costly, unpleasant, and dangerous. Edmondson's work aims to shift this perspective and emphasize the importance of reporting and learning from mistakes.
Rethinking Failure - Edmondson argues that it is a mistake to lump all errors into the same category and attempt to eliminate all failure completely. Instead, she suggests setting up systems to catch and correct mistakes before they cause harm.
Context-Dependent Attitudes Towards Failure - Edmondson explains that the 'fail fast, fail early' mantra, which encourages rapid experimentation and learning from mistakes, is not universally applicable. The appropriateness of this approach depends on the context and the stakes for error.
Intelligent Failures - These are undesired results of forays into new territory driven by a hypothesis. They are essentially experiments that didn't produce the expected result and often occur on the frontiers of knowledge or discovery. Intelligent failures can occur when embarking on a new personal endeavor or hobby.
Complex Failures: The Case of the Columbia Space Shuttle - The 2003 Columbia space shuttle disaster is an example of a 'complex failure'. The shuttle broke apart upon reentry into Earth's atmosphere after a foam strike during launch damaged its wing, allowing superheated gases to penetrate the shuttle on reentry. Complex failures occur in complex tasks on the frontiers of human knowledge and can have disastrous consequences.
Understanding of Complex Failures - Complex failures are often the result of numerous small factors aligning in a 'perfect storm'. It's not typically one major mistake, but a series of minor issues that align to create a significant problem.
Importance of Speaking Up and Systemic View of Problems - To prevent complex failures, it's essential to speak up about potential issues and take a systemic view of problems rather than looking for a single cause. Multiple factors can contribute to a failure, and encouraging people to voice their concerns can help catch and correct issues early.
Proactive Error Management: Toyota - Toyota uses a system called the 'Andon cord', which any team member can pull if they see something wrong. This system encourages early detection and correction of errors, preventing small issues from escalating into significant problems.
Understanding the Process of Stopping an Assembly Line - Pulling the cord to stop an assembly line doesn't halt operations instantly. Instead, a team leader arrives to discuss the potential issue. Most of the time, the issue can be resolved or it turns out not to be a real problem, so the line continues. However, about one out of 12 times, there is a genuine issue. When this happens, the line stops and won't restart until the problem is resolved.
Cost vs Quality in Production - Halting an assembly line can be costly, as every minute of downtime equates to a potential lost car sale. However, such stops are considered investments rather than costs. The ability to identify and rectify small problems along the way reduces the risk of producing imperfect cars, leading to higher quality production.
Types of Failures and a Sailing Incident - Failures can be classified into intelligent, complex, and basic categories. Basic failures can occur due to lack of vigilance or overconfidence, even in familiar situations.
The Danger of Overconfidence - Situations where individuals feel they can perform tasks 'in their sleep' are often ripe for basic failures. Overconfidence can lead to inattention and mistakes. Hence, it's crucial to remain vigilant and mindful, even when carrying out familiar tasks.
The Role of Checklists in Preventing Basic Errors - Checklists can be an effective tool in preventing basic errors. However, simply having a checklist is not enough. It must be used mindfully. A crucial step can be overlooked due to habitual responses, leading to a catastrophic failure.
Embracing Intelligent Failure - In a world where failure is often stigmatized, it's important for organizations to identify places where they can fail intelligently. This involves acknowledging and learning from mistakes, which can lead to fewer basic and complex failures.
The Concept of Intelligent Failures - Intelligent failures are those mistakes that provide valuable information for future attempts. These occur in new and uncharted territory, where hypotheses are tested and often proven wrong, but they can be useful for learning and growth.
Failure in the Context of Scientific Research - Failure is an integral part of the scientific process. Failed hypotheses provide crucial information about what doesn't work, enabling adjustments in approach and renewed attempts.
Failure as a Process of Discovery - Failure is likened to groping in the dark until finding the door handle, a process of discovery. This analogy can be applied to various life situations, like finding a life partner, innovating at work, or experimenting in the kitchen.
Intelligent Failure in the Context of Blind Dates - Blind dates, being inherently unpredictable, can serve as examples of intelligent failure. Each failed date provides valuable information that can inform future decisions.
Real-life Example of Intelligent Failure - A personal story about dating experiences illustrates intelligent failures. After a failed date, the subject was more cautious with the next date, reducing potential loss and resulting in a successful match.
Minimizing the Scope of Failure - Limiting the investment in an uncertain outcome can minimize the scope of potential failure. This strategy can gather necessary information for future decisions.
Concept of Intelligent Failure in R&D - Intelligent failure in research and development involves conducting experiments knowing that many may not work out as expected. These 'failures' are part of the process of discovery and innovation.
Thomas Edison and Intelligent Failure - Thomas Edison exemplified intelligent failure. He conducted thousands of experiments that didn't yield the desired results, but he viewed these as ways that didn't work, demonstrating the essence of intelligent failure.
The Importance of Doing Homework - 'Doing your homework' is crucial in intelligent failures. It means using all available resources and knowledge to prepare before embarking on a new venture or experiment.
Example of Intelligent Failure in Scientific Research - A team's quest to separate the strands of RNA demonstrated intelligent failure. After numerous unsuccessful attempts, they discovered a reagent that led to a successful experiment.
Characteristics of Intelligent Failures - Intelligent failures involve calculated risk-taking, hard work towards success, understanding that there will be setbacks, and resilience in the face of these setbacks
Reframing Setbacks and Failures - Practicing intelligent failure involves reframing setbacks as necessary and part of being human. This reframing helps in building resilience and character.
Seeking Out Intelligent Failures - Companies and individuals should seek out intelligent failures more often. Embracing a framework that views failures as opportunities for learning can lead to productive and thoughtful approaches to new experiences.
Embracing Failures - Failures should be viewed as learning opportunities and thus, should be welcomed instead of being avoided
The Story of 'Unsung Hero' - The story revolves around Susan Prescott's twelfth-grade English teacher, Fred DeMayo, who helped her conquer a fear of public speaking that stemmed from a mild stutter. This pivotal experience led Susan to become a corporate trainer, a role that heavily involves public speaking.
Closing Remarks - The episode ends with an encouragement to share it with others who might find it interesting. A New Year wish is also extended to the listeners.
FAQs
What error led to the downfall of a 134-year-old engineering company, Taylor and Sons? The downfall of Taylor and Sons was caused by a typographical mistake made by a government agency, which erroneously recorded the company as being in liquidation. The liquidation record was intended for a different business named 'Taylor and Son'.
What is Amy Edmondson's research about? Amy Edmondson's research focuses on mistakes and failures, specifically in the medical field. She aims to understand the causes of these errors and their potential for harm.
What is the Team Diagnostic Survey? The Team Diagnostic Survey is a tool used by Amy Edmondson to measure team effectiveness. It assesses several factors such as interpersonal relationships, self-assessed performance, resource availability, leadership quality, and job satisfaction.
What was Amy Edmondson's initial hypothesis regarding team quality and mistakes? Amy Edmondson initially hypothesized that better teamwork would lead to fewer mistakes.
What were the unexpected findings in Amy Edmondson's study? Contrary to her initial hypothesis, Amy Edmondson found that high-performing teams were reporting more mistakes, not fewer. This led her to conclude that psychological safety in teams allows members to admit and learn from their mistakes without fear of punishment.
What does Amy Edmondson suggest regarding the perception of mistakes? Amy Edmondson suggests reframing the perception of mistakes and failures. She emphasizes that unreported mistakes cannot be learned from, and therefore promotes an open interpersonal climate for learning from mistakes and failures.
What is Amy Edmondson's view on trying to eliminate all failure? Amy Edmondson argues that it is a mistake to attempt to completely eliminate all failure. She explains that humans are fallible and will inevitably make mistakes. Instead, she suggests setting up systems to catch and correct mistakes before they cause harm.
Is the 'fail fast, fail early' mantra universally applicable? According to Amy Edmondson, the 'fail fast, fail early' mantra, which encourages rapid experimentation and learning from mistakes, is not universally applicable. While this approach may be suitable for a laboratory or R&D group, it would be inappropriate in contexts where the stakes for error are high.
What is the concept of 'intelligent failures'? Intelligent failures are undesired results of forays into new territory driven by a hypothesis. These are essentially experiments that didn't produce the expected result, and they often occur on the frontiers of knowledge or discovery. They can also occur when an individual is embarking on a new personal endeavor or hobby.
What is a 'complex failure'? Complex failures occur in complex tasks on the frontiers of human knowledge and discovery and can have disastrous consequences. They are often the result of numerous small factors aligning in a 'perfect storm'. It's not typically one major mistake, but a series of minor issues that align to create a significant problem.
What is the 'Swiss cheese model' of failure? The 'Swiss cheese model' of failure is a concept that suggests complex failures are often the result of numerous small factors aligning in a 'perfect storm'. It's not typically one major mistake, but a series of minor issues that align to create a significant problem.
What is the importance of speaking up and taking a systemic view of problems? To prevent complex failures, it's essential to speak up about potential issues and take a systemic view of problems rather than looking for a single cause. Recognizing that multiple factors can contribute to a failure and encouraging people to voice their concerns can help catch and correct issues early, preventing failures.
What is 'proactive error management'? Proactive error management involves early detection and correction of errors to prevent small issues from escalating into significant problems. An example of this is Toyota's 'Andon cord' system, which any team member can pull if they see something wrong.
What is the process of stopping an assembly line? Pulling the cord to stop an assembly line doesn't halt operations instantly. Instead, a team leader arrives to discuss the potential issue with the person who pulled the cord. Together, they diagnose the problem. Most of the time, the issue can be resolved or it turns out not to be a real problem, so the line continues. However, about one out of 12 times, there is a genuine issue. When this happens, the line stops and won't restart until the problem is resolved.
What is the role of checklists in preventing basic errors? Checklists can be an effective tool in preventing basic errors. However, simply having a checklist is not enough. It must be used mindfully. The tragic crash of Air Florida Flight 90 serves as an example, where despite using a checklist, a crucial step was overlooked due to habitual responses, leading to a catastrophic failure.
How should organizations approach intelligent failure? In a world where failure is often stigmatized, it's important for organizations to identify places where they can fail intelligently. This involves acknowledging and learning from mistakes, which can lead to fewer basic and complex failures.
How do 'intelligent failures' differ from other mistakes? Intelligent failures occur in new and uncharted territory, where hypotheses are tested and often proven wrong. Unlike basic or complex failures, they provide valuable information for future attempts.
How does failure play a role in scientific research? Failure is a part of the scientific process. Each failed hypothesis provides crucial information about what doesn't work, enabling the team to adjust their approach and try again.
How is failure a process of discovery? Failure is compared to groping in the dark until one finds the door handle. This analogy can be applied to various life situations such as finding a life partner, innovating at work, or experimenting in the kitchen.
What is an example of intelligent failure in dating? Blind dates are an example of intelligent failure. Each failed date provides valuable information that can inform future decisions.
How can the scope of potential failure be minimized? The scope of potential failure can be minimized by limiting the investment (time, resources, etc.) in an uncertain outcome to gather necessary information for future decisions. An example is training pilots in simulators before allowing them to fly actual planes.
How did Thomas Edison exemplify intelligent failure? Thomas Edison conducted thousands of experiments that did not yield the desired results. However, he did not view these as failures, but as ways that didn't work.
What is the importance of 'doing your homework' in the context of intelligent failures? 'Doing your homework' involves using all available resources and knowledge to prepare before embarking on a new venture or experiment. It involves staying up-to-date with the latest research, understanding the context, and gathering as much information as possible before proceeding.
What are the characteristics of intelligent failures? Intelligent failures are about calculated risk-taking with an aim to minimize potential damage. It's about working hard towards success, understanding that there will be setbacks, and being resilient in the face of these setbacks. It's not about avoiding failure, but about viewing it as a necessary part of progress and learning.
Why should companies and individuals seek out intelligent failures? Embracing a framework that views failures as opportunities for learning can lead to productive and thoughtful approaches to new experiences.
Why is it important to embrace failures? Failures should be viewed as learning opportunities and thus, should be welcomed instead of being avoided.
How did Fred DeMayo help Susan Prescott? Fred DeMayo helped Susan Prescott overcome her fear of public speaking due to her mild stutter.
Glossary
Adverse Drug Events: Medical mistakes involving incorrect drug usage or dosage, potentially causing harm to patients.
Context-Dependent Attitudes Towards Failure: The idea that the acceptability and consequences of failure vary based on the specific context or environment.
Correlation Between Team Quality and Mistakes: The relationship between the effectiveness of a team and the frequency of mistakes made within that team.
Fail Fast, Fail Early: A mantra popular among tech entrepreneurs encouraging rapid experimentation and learning from mistakes.
Field Observations: Direct observation of a team or environment to gather data or insights.
Psychological Safety: A state where team members feel safe to take risks and be vulnerable in front of each other.
Rethinking Failure: A shift in perspective where failure is seen as an opportunity for learning and growth, rather than a disaster.
Team Diagnostic Survey: A tool used to assess various factors affecting team effectiveness, such as interpersonal relationships, leadership quality, and job satisfaction.
Understanding the Consequences of Mistakes: The process of evaluating and learning from the outcomes of errors.
Unexpected Findings: Research results that contradict the initial hypothesis or expectations.
Intelligent Failures: Undesired results of forays into new territory driven by a hypothesis. These are essentially experiments that didn't produce the expected result, often occurring on the frontiers of knowledge or discovery.
Complex Failures: Failures that occur in complex tasks on the frontiers of human knowledge and discovery, often resulting from numerous small factors aligning in a 'perfect storm'.
Swiss cheese model: A model of failure in which defects slip through when small holes in multiple layers of defense line up. Under this model, a complex failure is typically not one major mistake, but a series of minor issues that align to create a significant problem.
Proactive Error Management: An approach to prevent complex failures by encouraging early detection and correction of errors, preventing small issues from escalating into significant problems.
Andon cord: A system used by Toyota where any team member can signal a potential issue. It encourages early detection and correction of errors.
Basic Failures: Failures that can occur due to lack of vigilance or overconfidence, even in familiar situations.
Checklists: Tools that can be effective in preventing basic errors. However, they must be used mindfully to be effective.
Embracing Intelligent Failure: The practice of identifying places where organizations can fail intelligently, acknowledging and learning from mistakes, which can lead to fewer basic and complex failures.
The Concept of Intelligent Failures: A concept that argues that not all mistakes are the same and that some failures can be useful for learning and growth. These occur in new and uncharted territory, where hypotheses are tested and often proven wrong. However, these failures provide valuable information for future attempts.
Failure in the Context of Scientific Research: The idea that failure is part of the scientific process. Each failed hypothesis provides crucial information about what doesn't work, enabling the team to adjust their approach and try again.
Failure as a Process of Discovery: The idea of failure being compared to groping in the dark until one finds the door handle. This analogy can be applied to various life situations such as finding a life partner, innovating at work, or experimenting in the kitchen.
Intelligent Failure in the Context of Blind Dates: The unpredictability of blind dates can lead to failure, however, each failed date provides valuable information that can inform future decisions, thus making it an intelligent failure.
Real-life Example of Intelligent Failure: A personal story illustrating intelligent failures. After a failed date arranged by a friend, the individual was more cautious with the next date, limiting it to a drink rather than a whole weekend. This minimal investment reduced the potential loss and resulted in a successful match.
Minimizing the Scope of Failure: The importance of limiting the investment (time, resources, etc.) in an uncertain outcome to gather necessary information for future decisions.
Concept of Intelligent Failure in R&D: The practice of conducting experiments with the understanding that many may not work out as expected. These 'failures' are not seen negatively, but rather as part of the process of discovery and innovation.
Thomas Edison and Intelligent Failure: The perspective of viewing unsuccessful experiments not as failures, but as ways that didn't work. It demonstrates the essence of intelligent failure - having a clear goal, using existing knowledge, and moving forward through trial and error without causing harm or major setbacks.
The Importance of Doing Homework: Using all available resources and knowledge to prepare before embarking on a new venture or experiment. It involves staying up-to-date with the latest research, understanding the context, and gathering as much information as possible before proceeding.
Example of Intelligent Failure in Scientific Research: After numerous unsuccessful attempts to separate the strands of RNA using various reagents, the team achieved a successful experiment after finding a paper from the 1960s discussing a reagent called glyoxal.
Characteristics of Intelligent Failures: Intelligent failures are about calculated risk-taking with an aim to minimize potential damage. It's about being willing to work hard towards success, understanding that there will be setbacks, and being resilient in the face of these setbacks.
Reframing Setbacks and Failures: Learning to view setbacks as necessary and part of being human. It's about appreciating their necessity, especially when trying to accomplish great things. This reframing helps in building resilience and character.
Seeking Out Intelligent Failures: The idea that both companies and individuals should seek out intelligent failures more often. Embracing a framework that views failures as opportunities for learning can lead to productive and thoughtful approaches to new experiences.
Embracing Failures: The importance of viewing failures as learning opportunities and thus, should be welcomed instead of being avoided.
Hidden Brain Team: The production team behind the Hidden Brain podcast. The team includes various members with Tara Boyle as the Executive Producer.