Part 4: LSBot’s “Gotchas”

10 min read · Jan 22, 2025

In this series, we’ve explored several techniques for integrating LSBot into your study routine. We’ve focused heavily on LSBot’s strengths, but it certainly has weaknesses. Now, in this fourth and final article, we’ll explore some of them. We’re hopeful we can continue improving on these issues to create a smoother student experience, but it’s important to know what they are and how to approach them. It’s also a good time to emphasize that LSBot is a tool meant to be added to your learning toolkit, not to replace it.

With that in mind, let’s look at three “gotchas” you might run into when integrating LSBot into your studies, as well as techniques for avoiding them.

Before jumping in, if you haven’t had a chance yet to read parts 1–3, you can find them here:

1. Infinite loops of critiques

A common idea in education is that there’s always room for improvement. We usually agree with this! LSBot tries to be helpful by always giving you some feedback. This is very beneficial when you give LSBot an explanation, and your explanation isn’t quite right. Where students sometimes run into problems with LSBot (or other chatbots that you might’ve used) is when you give a really good answer. Our human TAs are good at responding with something like, “That’s perfect! No complaints.” LSBot, on the other hand, can have a hard time leaving it at that. This can occasionally result in an infinite loop where LSBot will give you a critique, you’ll implement this critique, LSBot will give you another critique, you’ll implement this critique, and so on. Sometimes, LSBot will even contradict previous critiques.

This cycle can leave students feeling uncertain about their approach to answering questions. It can also lead to overly verbose answers: as you implement LSBot’s suggestions, you include more and more information that gradually becomes less relevant.

The first step to avoiding this scenario is knowing it can happen. Once you know it’s possible, it’s easier to recognize it, decide, “Thanks, but no thanks,” and move on. If LSBot’s critiques start to feel off-topic, repetitive, or nit-picky, that’s likely an indication that your explanation or solution is top-notch.

Let’s look at an example. To demonstrate this scenario, we’ll need a relatively long example, so try to focus on how the answer changes and how LSBot responds rather than the details of the answer.

For the initial prompt, let’s provide LSBot with a question and our answer, asking it to critique our explanation. In this case, the answer to the question is an A+ — a virtually perfect answer. There are many ways to write an A+ answer, but this explanation is precise and accurate, covers the crucial points without going overboard, and is broken into logical chunks. A TA reviewing it would have no complaints:
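The actual question and answer in our prompt are longer than what fits here, but here’s a minimal Python sketch of the same idea; the variable names and exact wording are ours, chosen for illustration:

```python
lst_a = [1, 2, 3]
lst_b = lst_a        # both names point to the same list object

lst_b.pop()          # pop mutates the shared list in place

print(lst_a)         # [1, 2]
print(lst_b)         # [1, 2]

# Because lst_a and lst_b reference the same object, the mutation made
# through lst_b is visible through lst_a as well: variables as pointers.
```

The A+ answer walks through exactly that: both variables reference the same list, pop mutates that list in place, and the change is therefore visible through either variable.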

Note that while we’re using Python for our example, this concept is very similar in both Ruby and JavaScript. Instead of “list,” imagine “array,” and you have an A+ answer for both the Ruby and JavaScript equivalent code snippets!

LSBot is helpful, and we did ask for a critique, so it spends some time complimenting our answer, but then finishes with some suggestions:

Alright, none of these suggestions are necessarily bad. We wouldn’t consider any of them necessary based on the given question, but they do demonstrate a deeper understanding. Let’s update our answer based on these suggestions. The changes are highlighted:

This is still a good answer! It’s hard to argue that it’s better, though. Talking about integers being immutable isn’t terribly relevant to this question. It doesn’t hurt, but it starts to get into the territory of “listing things we know to bulk up the answer.” If you’re short on time during an assessment, then we could argue that the first answer was better. Now, let’s look at LSBot’s response. It still has suggestions for us, which are highlighted:

The previous suggestions we looked at were arguable, but now we’re spiraling.

The first new suggestion asks us to mention a totally different concept in our answer. When asked, “What concept is being demonstrated?” it’s essential to identify the single overarching concept. A lot is going on in even a simple code snippet like this one, so multiple concepts will always be at play. Variables as pointers is the key concept here. Mentioning other concepts (besides maybe mutability, as that is also key here) isn’t going to improve our answer; rather, it will make our assessor question whether we really understand the key concept being demonstrated.

The second suggestion seems to be grasping at straws. This isn’t something we focus on during this course, and it’s really another rehashing of the explanation of variables as pointers.

The third suggestion could be relevant to another question, but this question hasn’t asked us to update any code or provide an alternative. If we had a snippet with both reassignment and mutation, this might be a nice idea, but as is, this feels like we’re going back to bulking up our answer just to fill the page.

We could alter this answer and keep going, and LSBot would likely keep giving us “improvements” to consider. At some point, it would probably even tell us that we could omit part of the explanation that it suggested adding.

How do you break out?

Okay, we know the problem, but what can we do to avoid it?

  1. Know it can happen
    The best weapon against spiraling critiques is knowing that it can happen. Trust your gut. If LSBot is telling you to add some information that seems fishy, take it with a grain of salt. In general, LSBot is better at correcting your mistakes than it is at suggesting additional information. That explains why it was so easy to get LSBot to spiral when we started by giving it a perfect answer.
  2. Avoid cornering LSBot in your prompts
    To be fair to LSBot, we did explicitly tell it to continue critiquing our answer. A more open-ended prompt might’ve led to better results, where LSBot would’ve been comfortable omitting further suggestions without feeling like it had failed to do what it was asked. Dedicated bugger, it is! Maybe next time we would change “critique” to “review.”
  3. Be wary of contradictions
    While we didn’t see it here, as it usually occurs in even longer threads, if LSBot starts telling you to omit something it previously told you to include, or makes some other contradictory suggestion, it’s likely time to scrap the thread and start fresh. In fact, this is good advice for any interaction with LSBot. When LSBot starts contradicting itself, something has gone awry. A new thread is a great first step to troubleshooting.

2. Semantic Sinking

If you’ve attended a live study session during your time at Launch School, there’s a good chance that you’ve heard a TA redirect a conversation that’s gone a bit too far into the weeds. There are some questions where TAs might answer, “It doesn’t really matter,” or “Either way is totally fine.” LSBot finds these answers hard to give because, let’s be honest, they don’t always scratch that itch! When you start asking about incredibly precise language usage, you can sink in semantics, and LSBot will go down with you. It’s like asking someone to explain the difference between the colors lavender and periwinkle. Maybe someone has an answer, but it’d be hard to argue for, and there’s not much benefit in trying to distinguish between the two.

Let’s look at an example. Imagine that we’re concerned about the language to use when discussing assignment:

We ask LSBot to choose one, and it does:

We can follow up to make sure we understand LSBot’s suggestion:

This is incorrect. The true answer to this question is that either is acceptable, as long as you’re consistent. In fact, since we think of variables as pointers, it’s more common among Launch School students, and in the curriculum itself, to say that we assign a variable to a value, which LSBot has just told us is incorrect.
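As a quick illustration (this snippet and its phrasings are our own, not LSBot’s), both descriptions refer to the same simple assignment:

```python
greeting = "hello"

# Two ways to describe the line above:
#   "The variable greeting is assigned to the string 'hello'."
#   "The string 'hello' is assigned to the variable greeting."
# Either phrasing is acceptable, as long as you use it consistently.
```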

If you remember our example from earlier, where we explained the use of pop on a list referenced by two different variables, LSBot critiqued us several times without ever complaining about how we described variables pointing to values.

How to stay afloat

Our first two tips for avoiding semantic sinking are similar to avoiding spiraling critiques:

  1. Know it can happen
    When LSBot seems a bit nit-picky, trust your gut and start a new thread.
  2. Avoid “this or that” prompts
    When we ask LSBot to do something, it tries very hard to do so. That means when we say “choose A or B,” it will most likely choose. Not always (you’ll often see LSBot avoid this trap), but it does happen, and saying things like “Is there a preference between…” instead of “Which is better…” will help.
  3. Use LSBot to check LSBot
    When this type of mistake happens, it tends to spoil the rest of the current thread. If things seem fishy, start a new thread, rephrase your question, and see what happens. Or, tell LSBot what you think. Find a counter-example and see if it will change its stance.
  4. Trust the Launch School Material
    When there’s something incredibly important to get right, like the difference between the boolean True and a “truthy” value, the Launch School material will remind you several times. If LSBot is telling you that something is very important and you find no mention of it in the curriculum, you’re likely being led astray.
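For example, the True-versus-truthy distinction shows up constantly in the curriculum, and a quick check makes it concrete (this Python snippet is our own illustration):

```python
value = [1, 2, 3]

print(value == True)   # False: the list is not the boolean True
print(bool(value))     # True: a non-empty list is truthy

if value:
    print("This runs because value is truthy, not because it equals True.")
```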

3. Assessment Guidance

The final problem we’ll discuss can happen when students gauge the difficulty of an assessment based on what LSBot has provided them. Content generation is a very popular use case for LSBot, and it provides a lot of good practice problems. The problem is that LSBot doesn’t actually know what’s on the assessments.

This isn’t so much a wrongdoing on LSBot’s part as a mismatch in expectations. LSBot has a lot of Launch School material to work with. However, to avoid leaking assessment questions, it has never been given the actual assessments. It does have the study guides and all of the information in the assessment courses, so it can do a nice job of creating mock assessments. Still, it’s up to students to read the study guides and use their experience to determine whether questions are appropriate representations of assessment difficulty.

Let’s look at an example, imagining we’d like to prepare for the RB129 written assessment:

These are actually very good conceptual questions. However, if we read the RB129 written assessment information, we can tell that a few things are missing or incorrect.

  1. The assessment has 15–20 questions.
  2. The code spike includes a list of requirements.
  3. You’ll need to be able to identify concepts in existing code snippets.

If we went into the actual assessment based on this mock assessment provided by LSBot, we’d be in for a rude awakening.

Let’s update our prompt:

This will give us a much more accurate representation of what’s on the assessment.

While this example showed us a mock assessment that was much easier than the actual assessment would be, this can also go the other way around. If you ask LSBot for practice questions and they seem much more difficult than anything you worked on in the curriculum, chances are they are too difficult.

How to gauge assessment content generation

There are a few important steps you can take to ensure you aren’t caught off guard on assessment day when using LSBot to help you prepare:

  1. Read all study guide material carefully — more than once
    There’s a lot to keep track of when preparing for an assessment. Take this information in the same way you study the Launch School content: take notes, and carefully review the study guide and other relevant assignments, like the “Assessment Format” assignments.
  2. Use Launch School material and practice problems as a source of truth.
    If it takes you 45 minutes to solve the Launch School-provided practice problem for a 119 assessment but only 20 minutes for an LSBot-provided problem, there’s some disconnect happening, and it’s not in your favor. Use the Launch School material as your benchmark, and if necessary, use it to help give LSBot a benchmark.
  3. De-isolate your studies
    As always, we recommend meeting up with peers and joining study sessions. Sign up for weekly SPOT sessions where you can gauge your readiness with others. TA-led study sessions are also a great chance to get insight and feedback from TAs who grade assessments.

As we conclude this discussion on LSBot’s purpose, features, potential, and limitations, it’s clear that the tool offers a valuable contribution to modern learning. With its ability to provide immediate feedback and simplify complex topics, LSBot serves as an effective resource to support your studies.

By incorporating LSBot into your routine, you gain a flexible and accessible resource that supports you at any stage of your learning journey. It’s a tool designed to complement — not replace — your studies, providing clarity and reinforcing understanding while encouraging curiosity and independent exploration.

Know that we continue to work hard behind the scenes to improve LSBot to make education more accessible and student-focused. We hope you’ll take advantage of LSBot, and we encourage you to share your ideas and feedback to help shape its continued development.

Happy studying!
