"Doveryai, no proveryai" is a Russian proverb meaning "trust, but verify" and was a phrase adopted by Ronald Regan in the 1980's when negotiating with the Soviet Union on nuclear weapons. Its a phrase that is also one of the foundations behind crypto-currencies / blockchain's where every node needs to verify data received to ensure security.
Is it ("Trust, but verify") still relevant today? Well in today's technology world Open Source software and code libraries are taken on "trust" that they work, so why do we seem to "distrust" code created by our own colleagues? What's the difference really...?
This is something that has always puzzled me. Code is code is code, right? If the code works - that is, it passes the tests (automated/manual/unit/sanity, etc.) - and its output is what you expected, then that code should be "trusted". Verification has been applied, and it's all good.
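To make "verification has been applied" concrete, here is a minimal sketch of what that can look like in practice: a couple of unit tests run with pytest. The function parse_iso_date and its tests are hypothetical stand-ins for whatever piece of code you are deciding whether to trust.

```python
# A minimal sketch of "verification" in test form, using pytest.
# parse_iso_date is a hypothetical stand-in for whatever piece of
# code you are deciding whether to trust.
from datetime import date

import pytest


def parse_iso_date(text: str) -> date:
    """Toy implementation under test: parse 'YYYY-MM-DD' into a date."""
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)


def test_parses_a_valid_date():
    assert parse_iso_date("2024-03-01") == date(2024, 3, 1)


def test_rejects_garbage():
    with pytest.raises(ValueError):
        parse_iso_date("not-a-date")
```

If both tests pass, the "verify" step is done - the interesting question this post asks is why that is often enough for a stranger's library but not for a colleague's code.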
So why do we treat code written within an organisation so differently from code written outside it? Too often I hear that the code "isn't how I would write it", "we need to rewrite this, it doesn't meet our style", or "it was written by xyz and their code was poor"... Is any of that relevant in a world built on reuse?
In reality, most software today is not really written from scratch. The availability of open source libraries, code-sharing hubs and pre-prepared packages means that software creation is a bit like using Lego: pick and mix from a selection of pieces and choose the ones that best suit the application being built. Here it's a process of "verify" that the piece chosen does what it claims to do; it's then taken on "trust" that it will continue to do so, contains no nasties and is bug-free. And generally this turns out to be the case - I read a statistic somewhere (I can't remember where) estimating that 97% of software in use today contains some open source elements - that is a whole lot of "trust" being undertaken in the world of software development.
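One concrete form the "verify" step can take with a third-party piece is checking a downloaded artifact against the checksum the project publishes before trusting and installing it. A minimal sketch, where the file name and expected digest are placeholders rather than real values:

```python
# A sketch of one concrete "verify" step for a third-party artifact:
# compare its SHA-256 digest against the value published by the project
# before trusting and installing it. The file name and expected digest
# below are placeholders, not real values.
import hashlib
from pathlib import Path

# Placeholder: in practice, copy this from the project's release notes.
EXPECTED_SHA256 = "replace-with-the-published-digest"


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


artifact = Path("some-library-1.2.3.tar.gz")  # placeholder file name
if sha256_of(artifact) == EXPECTED_SHA256:
    print("Checksum matches - OK to install")
else:
    raise SystemExit("Checksum mismatch - do not trust this artifact")
```

Package managers build the same idea in - pip, for instance, can enforce hash-pinned requirements with its --require-hashes mode - so the check can live in the build pipeline rather than being a manual step.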
And in most cases this works well: software is written quickly and effectively, money is spent on value-add new development rather than on reinventing wheels, and creative effort can be focused. It does, however, tend to raise the worries of security-focused colleagues - but there are much better ways to tackle security, which will be the topic of a future post.
So, given this level of trust in third-party open source code, why does trust break down in other areas? Why are software engineering teams often internally critical of each other's code rather than taking it on "trust"? If it has been "verified" - it passes unit and automation tests and a sanity sweep - why does internally developed code so often get the look of "distrust" and a "not invented here, so I don't like it" reaction? If we are willing to trust code developed by people we have never met, published out onto the internet, why are we more reluctant to trust code created by colleagues sitting a few desks away?
There is an argument that a lot of open source software is created by organisations in some form, and this is true - however, a vast amount isn't. A significant proportion is in reality created by individual engineers who have donated it back to benefit the wider software engineering community.
I think it's sometimes worth remembering "trust, but verify", and remembering that it applies to our own colleagues' code too - if you have verified that it does what is both expected and needed of it, then trust it, use it and move on to the task at hand: creating new things that really drive change and add value. There is no value in reinventing the wheel.
Is it ("Trust, but verify") still relevant today? Well in today's technology world Open Source software and code libraries are taken on "trust" that they work, so why do we seem to "distrust" code created by our own colleagues? What's the difference really...?
This is something that has always puzzled me. Code is code is code, right? If the code works - that is passes the tests (automated/manual/unit/sanity etc) - and the output from the code is as you expected it, then that code should be "trusted". Verification has been applied, and its all good.
So why do we behave so differently to code written within an organisation to that outside of it? Too often I hear that the code "isn't how I would write it", "we need to rewrite this it doesn't meet our styles", "it was written by xyz and their code was poor"... Is any of that relevant in a world full of reuse?
If reality, most software development today is not really written from scratch, the availability of open source libraries, code sharing hubs, pre-prepared packages means that essentially software creation is a bit like using Lego, pick and mix from a selection of pieces and choose the ones that best suit to build the application - here its a process of "verify" that the one chosen does what it claims to do, and then its taken on "trust" that it will continue to do so, contains no nasties and its bug free. And generally, this turns out to be the case - I read a statistic somewhere (can't remember where) that its estimated 97% of software in use today contains some open source elements - that is a whole lot of "trust" being undertaken in the software world of software development.
And, in most cases this works well, software is written quickly and effectively, money is spent on the value-add new development and not on re-inventing wheels plus it means creative effort can be focused. IT does however tend to get security focused colleagues worries up - however, there are much better ways to tackle security which will be a topic of a future post.
So, given this level of trust with the use of 3rd party open source code; why does trust break down in other areas? Why will software engineering teams often be internally critical of each others code and don't take this on "trust" - if its been "verified" that it passes unit/automation test and a sanity sweep why does internally developed code often then get the look of "distrust" and a "not invented here" so I don't like it approach? If we are willing to trust code developed by people we have never met that's been published out onto the internet why are we more reluctant to trust code created by colleagues sitting a few desks away?
There is an argument that a lot of open source software is created by organisations in some form, this is true - however, a vast amount isn't - a significant proportion is in reality created by individual engineers who have donated it back out to benefit the wider software engineering community.
I think sometimes its worth remembering back to "trust, but verify" and remembering it applies to our own colleagues code too - if you have verified it does what is both expected and needed of it then trust it, use it and move onto the task at hand of creating new things that really drive change and add value. There is no value in reinventing the wheel.