As Anne-Laure Fayard said, it's highly unlikely that you would see any major learning outcomes or mindset changes from a single workshop, although I still think it's worth measuring (if for no other reason than it will give you practice and insight into developing these types of measures). I would keep it very simple. Before you begin, ask kids to write down or illustrate the scientific theory you'll be covering in the workshop; then give them the opportunity to do this again at the end, and/or to write down how the workshop either helped confirm their original thinking or inspired revisions to it. You could then create a rubric to assess the different variables you want to learn about. (Here is an example of a simple rubric I created to analyze student engagement in a design thinking workshop: https://docs.google.com/document/d/1ELB2eWXx3MojT6Ou0RT9y3RY919QFZU5GQgmkB88VwU/edit?usp=sharing .)
If you also have the capacity to collect and analyze qualitative data, I think this would provide more insight into mindsets like "grit" and self-efficacy/creative confidence, and into learning as well. It could enable you to better understand which elements of the program are having the strongest impact and in what ways (content knowledge reinforcement, conceptual growth, mindset development, etc.). It might also provide a clearer picture of how the workshop impacts different student subgroups (for example, maybe some aspect of your approach works really well for extroverted students but proves more challenging for introverts).

Capturing students' specific language and conversations can be really powerful. I used simple digital voice recorders on the table to capture partner dialogue during a design thinking workshop I ran with 3rd, 4th, and 5th graders (and made sure the students understood that they were being recorded--this is important!). It totally changed my perspective on what "on task" behavior can look like, and helped me recognize which elements of my workshop design were not resonating with students at this developmental stage. You could also come up with two or three interview questions to ask individual students or student groups at specific points while they are working (video recording these interviews, if possible). This would allow you to be systematic in your data collection and oriented toward whichever goal(s) you hope to measure, while still leaving room for the kind of insight you can only gain from rich qualitative data.
Oh, also: if you give me more specifics about what teachers' and schools' assessment needs and goals look like in this context, I can make some more recommendations related to point (iii). I've done a fair amount of work related to the creation of digital assessment and feedback tools, but I'd want to have a better understanding of teachers' current practices, their level of comfort with online platforms, etc.