Tuesday, April 17, 2018

The End of Manual Testing

Manual Testing - Finally dead

...
Yes, again.
Another article with this worn-out title. And no, it's not what you assume.
As you might have noticed here and here, I really dislike the term "manual testing". Sadly, just saying "don't use that word" doesn't actually work. Whether we like it or not, the words we've used so far have shaped thought patterns and ways of working, and today there are people who test software almost exclusively by writing code, and others who test software without writing any code at all - contrary to what Bolton and Bach argue, it isn't really effective to call both of these activities by the same name ("testing"), because people perceive them as separate. The distinction between "checks" and "testing" doesn't really stick either, since people outside the software-testing field are used to referring to writing code as "writing tests"; on top of that, a branding mishap led too many people to use this distinction to say "checks are not testing" and downplay their importance - again, despite repeated corrections from Bach and Bolton (worse still - the distinction doesn't translate well into Hebrew, where "livdok" (to test) is a less encompassing word than "livchon" (to examine) - unless someone has more intuitive suggestions).
In short - I'm missing a way to separate the two kinds of activity that, on the one hand, keeps the word "testing", since that's what people are used to, and on the other hand doesn't create a hierarchy between the two. The adjective "manual" is perceived as outdated and inferior next to "automated", and the adjective "exploratory" is a blatant lie - I explore no less when I write test code, or even when I analyse run results. It's also an ineffective lie - people don't intuitively grasp the whole notion of software testing as a journey of discovery, so the adjective tells them nothing. On top of that, it sounds a bit like a politically-correct term for "manual". In short, we need an adjective that can stand opposite "automated" as an equal, without creating a false impression and without sounding like a thin mask over "what everyone really thinks".
Lately an idea has been rolling around my head - what about "interactive testing"?
The main advantage of this term, for me, is that it needs no explanation - "interactive" is an adjective that means "involving human activity". Moreover, the term is already positively charged, and it almost always appears as an adjective denoting an advantage (have you heard of "interactive learning", for instance?). The human involvement we refer to in interactive systems is desirable, and in many cases it is even the goal.
So, as a software tester, I rely on tests - some of them will be automated, and some will be interactive. Neither term needs explaining, and in my opinion - neither needs defending. There are places where automation matters to us, and places where interaction is required.
The one point I still struggle with is the slide from type of testing to type of tester (there are no "interactive testers", just as there are no "manual testers" or "automated testers"), but I think that, at the very least, choosing "interactive" doesn't make matters worse.

So, what do you think?

-----------------------------------------------------------------------
Yes, again.
Another article with this unimaginative title. And no, it's not going to be what you might assume.

As you might have noticed here or here, I don't like the term "manual testing" (unless, as is customary to say, you are testing a manual, in which case it is a fine way to describe what you are doing). Unfortunately, simply going around saying "don't use X" is very ineffective; we need to suggest alternative wording that is as compelling as the current one. Whether we like it or not, the words we've used until now have helped create a thought pattern and define a strong distinction between writing code to test code and humans testing software. Those activities tend to be perceived as separate, and are sometimes even performed by different people. At any rate, people are used to thinking of two activities, and therefore use two different terms to distinguish between them, so while saying "all of this is simply testing" is, in my eyes, preferable, it will be quite difficult to persuade people less versed in the world of testing to give up that useful distinction.
Currently, I'm aware of two ways to retain this differentiation, both of which I find lacking in some way. First, there is Bolton & Bach's distinction between "testing" and "checking", which has a few problems: it has been abused to make writing code to test software seem inferior to playing with the software in person, it is not immediately understandable to people less interested in testing (i.e., it needs to be explained), and it does not translate well into Hebrew (and possibly other languages).
Second, there's an odd trend of using "exploratory" as a euphemism for "manual". While the terms can be traced back to Bach & Bolton as well, I don't think I've heard either of them use it this way (which makes sense, as they retired their use of "exploratory testing" for good reasons). Using "exploratory" to mean "manual" has even bigger problems. First and foremost, it is a blatant lie: when I write a piece of "automation", I am actively exploring, and the same is true when I read the run reports. Second, this term too has no meaning on its own for laypeople - the idea that software testing is an act of exploration is not a common concept outside of specific testing paradigms and communities; most people are more in the "just do your thing so that we can ship" camp (or worse - "make me some quality"). Using "exploratory" in this sense feels like a politically-correct way to say "manual", and like most P.C. language, it is useful only for a very short time, until the derogatory meaning and prejudice catch up with the new term. In addition, as it comes from the same idea-space as testing vs. checking, it is easily used to demote "automation" and drag us back into that purposeless superiority struggle.
So, what I'm looking for is a term that can go along with people's habits (so automation remains "testing"), maintain the needed distinction between the two activities (people's habits, did I mention them?), be intuitive to understand, and convey enough self-confidence to co-exist peacefully with automation without needing to be defended in a futile and harmful battle. If possible, it should also help narrow the mental gap between the two activities.

Some time not too long ago, a thought struck me. I'm not sure where or when, but it has been there for a while now. How about interactive testing? In essence, I feel it addresses my problems with the other terms and meets most of the goals I set for it.
First of all, it is intuitive to understand - we use "interactive" in our day-to-day life and contrast it with "automated", so there's no surprise when we use it ("automated" / "exploratory" is an odd axis; "automated" / "interactive" is as common and as natural as "automated" / "manual").
Second, it is positively charged: "interactive" is used most of the time to represent an advantage, or a desired result. For instance, have you heard about Interactive Learning?
Finally, it conveys a clear meaning of what testing is - "interactive" implies cognitive involvement of the human(s) interacting with the object of interaction. Unlike "manual", which implies boring, repetitive work, "interactive" should be interesting and captivating.
As a tester, I do testing. Some of it will be automated, some of it will be interactive. There's no need to explain either term, and my feeling is that neither needs to be defended against the other. Some things are better automated, some are better in an interactive form.
One point that is not solved by using a better term is the mix-up people make between type of testing and type of tester - an "interactive tester" is as meaningless as a "manual" or "automated" tester (another long-time rant of mine). But hopefully it does not make the situation any worse on that front.

So, would "interactive testing" work for you? I would love to hear your thoughts.

Thursday, March 22, 2018

Deadlines

A month ago I asked here for volunteers to keep tabs on me: the idea was to set a short-term improvement goal and then find an "accountability buddy" (to borrow the name from Lisi & Toyer) with whom I might either share a similar goal, or simply keep tabs on each other as we each pursue our own goals, the point being to drive each other to put in the required effort.
I was fortunate enough to have Lisi ping me a few days back and help me formalize some of the thoughts that were running through my head by placing them in a mindmap. I might go occasionally and add some stuff to it.
Narrowing things down, I managed to settle on three goals I'd like to tackle first:
1. Create a personal "talk library". Creating a conference talk, for me, is still quite an arduous task - finding an idea and formulating it.
2. Go over BBST free material and invest time to learn it (probably as a preparation to taking the course)
3. Build a project I promised my dad a while ago, before getting distracted by a lot of other things.

So, want to help me keep track and achieve one of these goals? All you have to do is choose a goal you want to help me with, and ping me in some way. We'll set up a way to check on each other later.

Since I'm not sure what to choose, I'll go with "first come, first served" - if someone wants to join me on one of my goals, or simply finds one of them interesting to listen to - I'll go with that.

In terms of timelines - the next couple of weeks I'll be busy with the upcoming holiday (it's Passover time, so next week is cleaning, and the week after is friends and family), and the following month I'll probably be working on a slide-deck for a talk I committed to give at a local meetup (that's my way of forcing myself to prepare conference talks in advance), so I'm hoping to start working towards one of these goals on May 1st, and would love to find a partner for the journey before then.

Anyone want in on that?

People problems suck

(No Hebrew, it's enough to wallow in it one time)

So, today our manager informed us someone on the team is being sent home.
I don't think anyone on the team was surprised by the decision, as we had all been experiencing some of the difficulties for the past 6 months, but even so, and even if we might think it was the correct decision - it is no fun.
What is really upsetting is that everyone on the team knows that this person had their heart in the right place - they cared, they tried their best, and then some, and really cared about what they were doing. The question buzzing in my head (and in others', judging by some corridor talk after the announcement) is "Did we do enough to try and avoid this?" After all, we often say that caring and trying can take one a long way. So, could we have done anything different to fix the situation in any way other than letting that person go? Could (and should) we have done more than we did?
When I look back, it seems to me that all the symptoms originated from one core problem - we did not manage to make the team a safe environment for that person to try and fail, partly because the way this person was failing was hurting the team, and partly because we didn't consciously try to - we just assumed that everyone felt safe to fail, and missed it when that wasn't the case. This, obviously, only made things worse, because when someone does not feel safe to err, they default to not doing - which is just another failure that adds pressure and makes everything spiral down really fast.

I'm not sure if I have any concrete conclusions out of today.
Or, in other words - bummer.

Monday, February 26, 2018

ETC 2018, did I say it was awesome?



Yes, I did. The first part is here.
However, that was only the first day of the conference.
The second one started with a nice breakfast where I got to speak a bit with Abbey and Llewellyn, and as we were getting (a bit late) to the opening keynote of the day, Llewellyn shared an awesome strategy for getting changes into an open-source project you use: hire the maintainer for a day or a week to make the change with you - that way the feature you need finds its way into the core product (so no need to fork your own version of the tool and "enjoy" keeping it updated). It will also probably be much cheaper, as the maintainer knows the project very well, and by pairing with them you can add your specific domain knowledge to the solution.

Then we got to the keynote, just as the speaker was starting. Topic of the talk: become a skeptic.
The talk left me with a somewhat ambivalent feeling: on the one hand, it was very well presented by a speaker who clearly knew what he was doing. On the other hand, it felt a bit lacking in terms of content, and more so - actionable content. Sure, I can get intuitively why being a skeptic might help a tester, but it felt a bit like preaching to the choir: I couldn't find any real, concrete reason to become a skeptic, and I am not really convinced of the value of skepticism as a tester's main approach.

However, after the keynote I got to Abbey & Lisa's workshop on pipelines. What can I say? It was great, with good exercises and even better explanations between them. Within the very limited time-frame of this workshop (it can totally be a full-day one, I suspect, or at least half a day) we managed to decide on a pipeline based on the pain points each of us has at work, and came to realize our pipeline is waaay too long (we estimated a week to go through everything there). It is interesting to see how much of a discussion one can get simply by laying out the process your code goes through to production. I really enjoyed this workshop.

Then, after a tough choice between 3 talks I wanted to go to, I attended Alex's talk on exploratory testing, and on practicing speaking out the way we test. If you have not yet had the chance to hear Alex speak, you should. The talk was sheer fun (or rather, sheer learning fun) and I liked the way she managed to communicate her thought process and involve the audience in the exercise.

Following this talk I attended Mirjana's talk about production monitoring and some of the tools they are using. This one was particularly interesting for me, as almost all of the tools she mentioned are either used by people at my work, or are intended to be used somewhere soon (I even participated in a POC for some of them), and seeing some of the benefits she was able to get out of those tools was really nice. It also connected well with something Gojko mentioned in the opening keynote: make stuff visible for the developing team. Great insights are gained that way.

The open space is always a great event, and this one was no different. One thing I need to practice is more self-restraint, limiting myself to owning only one subject, as there are always so many great topics. I started by going to a discussion led by Ron about how to train new testers. Apparently this is a tough question for all of us - we know how to do it by mentoring or pairing, but teaching it at scale poses some difficulties. Sadly, I left the discussion early due to a mistake on my part regarding the next session's start time, so I had 20 more minutes before the discussion I led; instead I joined a discussion about management. Then there were the two discussions I posted. The first was tools & the way they change the way we think, from which I gained insight about the way some tools changed the team's processes, and about the need to constantly monitor the effect of new tools on team culture. The second discussion was a bit tougher - how to help a colleague who's struggling to keep up, and when to give up. My takeaway from this discussion - different things might work for different people, and don't give up easily (however, you'll know when you've given up, so don't prolong it more than necessary).

Great day, isn't it?
We had a blast closing it with a keynote by Dr. Pamela Gay about some of the challenges she faces in her work with NASA, which, in case you wondered, includes identifying craters on Mars or on the Moon and correlating pictures taken by astronauts with Google Maps. Both tasks are difficult for professionals and for computers alike. However - people are great, and are willing to help, if you are willing to filter out some of the data. The coolest part? You can join the effort (but please wait until tomorrow at least).

Then, the conference was done. Or, mostly done - a lot of people met for dinner and we had some fun chatting around. It is amazing how, when the conference is over, almost everyone around just wants to extend the experience a bit more. It was really tough to do the "responsible" thing and go to sleep early in order to catch a cab at 5 AM to the airport. Still, this is what I did.
In the morning I shared the taxi with Abbey, so I got to extend the conference ambience to the last possible moment (though, I must admit - at 5 AM, the ambience is mostly sleepy).

What amazes me is that while the sessions themselves are really good, what makes this conference so great lies in the small moments that are harder to tell about: speaking with new people and with those I've met before, seeing everyone around me smiling (to themselves and to each other), and sharing an experience. My only regret is that I did not get to spend more time with people, and there are some I wish I could have caught up with a bit more. However, I will follow the advice given by the conference organizers at the open space: whatever happened is what should have happened, and it could not have been any other way. I'm very happy things were as they were.

So, until next year :)



Thursday, February 22, 2018

ETC 2018, it was simply awesome


(This is part one, as it came out a bit long, the next part will be out in a few days)
European Testing Conference is over, and it was the best ETC so far. Each year I come to ETC with higher expectations, and each time they somehow manage to surpass them and make it look like the natural order of things. There will be a retrospective post from me later, but in the meantime, I want to sort out some of my experiences from the conference days (I wrote briefly about the days before the conference here).
The morning started with a nice breakfast at the hotel, chatting a bit with some people (with whom - I don't remember; or rather, I remember most people I talked to, it's only the when that's a bit fuzzy) and after that - registration and the first keynote, in which Gojko Adzic presented his newfound approach to automatic visual validation. His main message was: UI tests are considered expensive, but now we have the ability to change that equation - not because of the tool (which looks nice; I got the impression it was some sort of mix between Applitools Eyes - comparing really small elements, defining textual conditions - and the Galen framework), but because we can now parallelize a whole lot of test runs using headless Chrome on AWS Lambda. So sure, this won't work for you if you are not on AWS, or can't parallelize your tests, but it's a nice thing to consider, and to see how far we can go towards this sort of goal.
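The fan-out idea from the keynote can be sketched in a few lines. To keep this self-contained, the renderer below is a hypothetical stand-in for a headless-Chrome-in-Lambda call (the function names and URLs are made up); the point is only the shape of running many visual checks in parallel:

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

def render_page(url: str) -> bytes:
    # Hypothetical stand-in: in the real setup this would drive headless
    # Chrome inside an AWS Lambda invocation and return a screenshot.
    return f"<html>{url}</html>".encode()

def visual_check(url: str, baseline: bytes) -> tuple[str, bool]:
    """Compare a fresh render against a stored baseline screenshot."""
    fresh = render_page(url)
    same = hashlib.sha256(fresh).digest() == hashlib.sha256(baseline).digest()
    return url, same

urls = [f"https://example.com/page{i}" for i in range(100)]
# Pretend these are the approved baseline screenshots from a previous run.
baselines = {u: render_page(u) for u in urls}

# The keynote's equation-changer: fan the comparisons out in parallel
# instead of paying for 100 sequential browser sessions.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = dict(pool.map(lambda u: visual_check(u, baselines[u]), urls))

print(sum(results.values()), "of", len(results), "pages match their baseline")
```

With real Lambda invocations instead of a stub, the wall-clock cost of 100 checks approaches the cost of the slowest one, which is what makes the "expensive UI tests" trade-off worth revisiting.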
Following the keynote I went to a talk given by Lisi & Toyer. Frankly, I came to this talk with very low expectations - sure, another "share and collaborate" talk. Perhaps this is why my mind was blown. Toyer & Lisi managed to tell an interesting story about how they created a "pact" with a specific goal in mind, and how many benefits they got from it. I think what really got me, though, was the genuine excitement they expressed around the whole process. I went out of this talk with a strong feeling of "that's a great idea, I should try it one day" (and, since most of the time "one day" equals "never", I'm looking for a volunteer to smack me on the head if I don't make anything more concrete out of this within 30 days, in this very blog).
Then came speed-meet. Since last year, I've learned to notice the good things in it - it forces people to open up and speak to people they don't know, and really breaks the ice fast. Still, it was a bit too loud for me. How loud? This loud. One thing I did learn before is to completely ignore the mindmap drawn by my partner and tell them I'd rather look at and listen to them than to a piece of paper, which helped a bit. I still got to shout towards some people I'd never spoken to before, and people I hadn't spoken with enough. I think I only needed a silence bubble to properly enjoy this event.
Following the speed-meet, and one minute alone in a quiet corner to recharge and give my ears some rest, there was lunch, with quite a nice setting to help people talk some more, this time in a quieter manner.

After lunch - workshops time!
A while before the conference I decided to go to the Gherkin workshop (I don't like calling the given-when-then formulation BDD, since for me BDD is a lot broader than that), in the hope that I'd manage to figure out why some people find this artificial, restrictive format useful - or, at least, learn when to use such a thing and when not to. Going through a workshop with some experts seemed like the best chance I could give it.
Well, apparently, I should have read the fine print better - the workshop was targeted at the already convinced: those who are using, or planning to use, the Gherkin formulation and want to learn how to do so better. I got to see some bad examples, discuss why they might be bad, and how to write a proper one. Frankly? Initially I thought it was a well-built workshop that I came to with the wrong expectations, but the more I think about it, the more I believe it was a waste of everyone's time. Writing a single Gherkin scenario is easy. The tips we got there were trivial (and easy to find online), and the discussion was not deep enough to justify our time (nor do I think it should have been). A better workshop, still aimed at users, would have been how to maintain a good suite of Gherkin scenarios, as even a relatively small number of well-defined scenarios can become a terrible chore to read and understand when there is no way to organise them. My personal limit before asking for a different format stands around 5 scenarios. If I have to read any more, the rigid format becomes actively harmful.
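For readers who haven't met the format: a single scenario really is easy to write. Here's a minimal, entirely made-up example (the feature and step names are hypothetical) - and it's easy to imagine how fifty of these, with no organising principle, stop being readable:

```gherkin
Feature: Account login

  Scenario: Successful login with valid credentials
    Given a registered user "dana" with password "s3cret"
    When the user logs in with "dana" and "s3cret"
    Then the user should see their account dashboard
```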

Anyway, rant time over, and I had a talk to prepare for. After dealing with some technical difficulties (I knew I had to purchase new batteries for my clicker) and tweaking the slides a bit to make sure everything on them was visible, I talked a bit about automation and some ideas on structuring a part of it. The slides can be found here (and will soon be available on the conference site). I got some valuable feedback from Richard Bradshaw after the talk, and as far as I can tell, the audience response was good (thank you Mira for your very kind words).

I then had a chance to relax a bit during lean coffee, which always feels too short (in fact, checking the schedule, I see we didn't even have an hour - it was too short!), but I got to have an interesting discussion with people I hadn't met before. I think I need to get a bit better at facilitating the discussion, but it went rather well even so. Between this and speed-meet, this is the better way for me to meet people.

We went on discussing the subjects at hand until the day's closing keynote where Lanette was sharing a whole lot of cat pictures and an interesting point alongside them.

I was a bit tired after such an intensive day, which was not over just yet - the conference dinner was scheduled for that evening, and so I went. Nice people, nice vibe, and everyone got a free drink. The place itself, though, felt like a restaurant, so people were sitting at their tables (a large table, but still) instead of wandering about. I had a nice chat with Karlo and Emily, but eventually my fatigue got the better of me and I took a tram back to the hotel to crash.

Monday, February 19, 2018

ETC time again!


So, it's this time of the year, and this year the European Testing Conference takes place in Amsterdam.
I got here early (Thursday) to make sure I'd get to tour the place a bit, and that my feet would be properly sore before we start (my favorite way of touring a new place involves a lot of wandering around, so my first touring day came to ~10 hours of strolling), and the city has been very welcoming - great weather (a bit chilly, but bright and sunny - just the way I like it), beautiful sights and some very good tourist attractions (I highly recommend taking a free walking tour, and the Rijksmuseum is very impressive).
I started the conference early, in a very good way, by meeting Marit on Saturday for a really nice chat and interesting food.
Then, come Sunday, after paying a visit to one of our data centers (from the outside - I'm not permitted to enter) and strolling around the lovely moat they have around it, the conference started with the speakers' dinner. It never ceases to amaze me how friendly and welcoming a group of people can be, and how fun and natural it feels to talk with them, or even just join in by listening, since just about everyone there has a lot of interesting things to share.
So, an amazing start to what I expect will turn out to be a magnificent conference.

Wednesday, February 7, 2018

Reading Listening to books - part 4


TL;DR - I'm listening to audiobooks, some reviews below, and I would love to get some recommendations from you.

This is the fourth part of a series of (audio) book reviews. Here are the previous posts:
Part 1
Part 2
Part 3


Crucial Conversations: Tools for Talking When Stakes Are High, Patterson, Grenny, McMillan, Switzler:
Short summary: A book about people skills. Specifically, how to have better discussions.
What I have to say: I'm fairly ambivalent about this book. On one hand, it addresses a super-important subject. On the other hand, I was very alienated by the examples in the book.
Starting with the good stuff - the authors coin the term "crucial conversation": a conversation that might have significant outcomes. Some are easy to detect - trying to agree upon a big business decision, asking for a pay raise, or deciding whether to relocate the family. Other conversations might turn crucial very rapidly - a minor disagreement becoming a shouting contest, a family dinner leaving multiple people sulky and hurt, or a routine work meeting where the wrong decisions are made because people are not really listening to each other.
People, so it seems, are really bad at speaking - despite doing so for most of their lives. And just to make things more fun, people act even worse exactly when they need to be at their very best, thanks to the all-too-familiar fight/flight mechanism that kicks in in stressful situations. Some people, however, seem to do better than others - and this book tries to explain how they do it.
The overall strategy, as far as I understood, is "pull out, relax, calm others, build confidence and start thinking together instead of trying to 'win an argument' ". Naturally, I'm simplifying things here, and skipping some of the tools they mention to actually do all of those points, but I think this is the core of all processes in the book.
When sticking to the principles and intentions mentioned in the book, I found myself agreeing vehemently. It does sound like a very compelling way to approach potentially difficult conversations, and some of the tools actually make a lot of sense. It is only when I got to the examples that I started feeling a bit off - sure, the examples are simplified to make a point, but as I was listening I sometimes found myself wanting to punch the teeth out of the example speaker. That's when I started wondering whether the book is heavily biased towards American culture. For example, in the fifth chapter a technique called "contrasting" is presented - in short, a way to neutralize suspicion by acknowledging it - and the example goes as follows: "The last thing I wanted to do was to communicate that I don't value the work you put in, or that I didn't want to share it with the VP, I think your work has been nothing short of spectacular". When I hear something like that, I assume someone is lying to me and trying to weasel their way towards a hidden goal. Living in a much more direct (and way less polite) society, such statements feel to me like pretty glitter meant to cover up some ill-meant actions. There are ways to phrase such a message that would be acceptable to me, but this is not one of them. This led me to think that the components of effective discussions mentioned in the book are very aligned with the stereotypes I have about American behaviour patterns. There isn't a single example I can point to (audiobooks are not easy to scan quickly), but almost every example felt a bit off - a bit too polite to be real, a bit too artificial to be convincing, and in some cases simply achieving the opposite goal: sowing suspicion instead of trust, seeming detached instead of concerned, and so on.
It reminded me of something a friend who relocated to the States told me: "At first it was very nice that everyone around was very polite and kind. After a while it started feeling phoney and annoying". All in all, the book left me thinking that to really benefit from this content, I would need a localized version of it, where the principles are taken, assessed and modified to match the culture, and the examples updated to something a bit more realistic. Given time and need, I think I can do some of this myself, so this is a book I intend to come back to in the future.

So, those are the books I've listened to recently (I'm currently listening to Great Mythologies of the World, which won't get a review here, being unrelated, but I think it's generally quite nice), and I'm gradually compiling a wish-list to tackle one at a time. What are the books you think I should listen to?