Research Ethics Roundup: unkind science, data sharing standards, diversity in psychological research, and more

This edition of Research Ethics Roundup covers “unkind science” and the public’s waning trust, international efforts to develop data sharing standards, lack of diversity in psychological research subjects, and digital phenotyping. 

People will not trust unkind science 
Humility and respect, toward both fellow scientists and the public with whom scientists hope to engage, are critical to building trust in the institution of science. Without this commitment, the public is inclined to doubt that scientists are acting in its best interest, a doubt that undermines an integral component of belief in and satisfaction with the process of discovery. 

What hope is there for those in science to build a trusting and respectful relationship with the public when so many scientists are schooled in a culture lacking these qualities? 

Genomics: data sharing needs an international code of conduct
Institutions in many nations have been working toward a consensus on data sharing ethics and protocols, but more work remains to be done, according to eight researchers who collaborated on an international cancer project involving sensitive data. Lessons learned include the value of long-term continuity, the essential yet fraught reliance on cloud technology, and the need for a clearly established code of conduct. 

“An international code of conduct for genomic data is now crucial. Built by the genomics community, it could be updated as technologies and knowledge evolve more easily than is possible for national and international legislation.” 

Psychology Still Skews Western and Affluent. Can It Be Fixed? 
After nearly a decade of discussion about the lack of diversity in psychology research, little, some critics say, has actually changed. The disparity in representation among research subjects has major implications not only for replicability and generalizability but also for ethics, including the Belmont principles of beneficence and justice.  

“The paper generated countless responses, meetings, and calls for reform. Widely covered in the media, it has since been cited thousands of times in the scholarly literature…. ‘But if you actually look at the numbers, the latest numbers coming in the last few years don’t actually show any shift in the diversity of samples.’” 

Our phones can now detect health problems from Parkinson’s to depression. Is that a good thing? 
Digital phenotyping, the process by which users’ inputs into digital programs are translated into discrete traits, is rapidly on the rise alongside the proliferation of cheap, accessible consumer technology. It shows tremendous promise for both diagnosis and treatment of health problems, but competing incentives among users, researchers, and technology companies mean that the practice poses considerable risks. 

Much of the clinical research on digital phenotyping has, in fact, taken privacy extremely seriously. But as this research starts to inform commercial products, a lack of regulation and the push for profit raise pressing questions about the future of digital phenotyping.