<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Feedback on The Dangling Pointer</title><link>https://aaron.blog/tags/feedback/</link><description>Recent content in Feedback on The Dangling Pointer</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 23 Jul 2021 18:16:58 +0000</lastBuildDate><atom:link href="https://aaron.blog/tags/feedback/index.xml" rel="self" type="application/rss+xml"/><item><title>A short analogy on Feedback &amp; Unit Tests</title><link>https://aaron.blog/a-short-analogy-on-feedback-unit-tests/</link><pubDate>Fri, 23 Jul 2021 18:16:58 +0000</pubDate><guid>https://aaron.blog/a-short-analogy-on-feedback-unit-tests/</guid><description>&lt;p&gt;Unit tests are small, focused tests that engineers write to verify their work piece by piece. Code that is tested tends to behave closer to expectations, and the tests protect existing behavior: if a future change alters something unexpectedly, a unit test fails. Passing tests are green checks ✅. Failing unit tests are red Xs ❌.&lt;/p&gt;&lt;p&gt;The default behavior is to write your unit tests after you've finished writing the solution. When an engineer sees all ✅, they call it a day and ship it. The funny thing with unit tests is ... &lt;strong&gt;they are also subject to being full of problematic logic or buggy code.&lt;/strong&gt; How do engineers know their tests are correct, or cover all the scenarios, if they've never seen them fail?&lt;/p&gt;</description></item></channel></rss>