Daily CSR
Daily news about corporate social responsibility, ethics and sustainability

Adding inclusion, protecting privacy and data security in software design: Cisco


Technology companies must earn the trust of their customers and users as stewards of the data that makes modern life possible. This entails being mindful of how our products are made and used, as well as taking steps to mitigate any negative consequences.

Cisco works hard to design and build technology that respects human rights, promotes inclusion, and protects privacy and security—so that everyone can benefit from a more connected world.

Respect for human rights is a fundamental innovation principle in Powering an Inclusive Future for All. In artificial intelligence and machine learning especially, privacy, security, and inclusion must be prioritized in the design methodology.

The adage "goodness in, goodness out" certainly applies to training data sets, which often determine how product design and user experience take shape. Developers must ensure that these data sets are robust, diverse, and representative of all users. Failure to do so can lead to inaccuracies, negative user experiences, and unintended bias.
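To make the idea of auditing a training set for representation concrete, here is a minimal sketch (not Cisco's actual tooling; the category tags and the 10% threshold are illustrative assumptions) showing how a team might flag underrepresented categories before training:

```python
from collections import Counter

def audit_representation(labels, threshold=0.10):
    """Flag categories whose share of a training set falls below a threshold.

    labels: iterable of category tags (e.g. lighting condition or hair type)
    threshold: minimum acceptable share per category (assumed value)
    """
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {cat: n / total for cat, n in counts.items()}
    underrepresented = [cat for cat, share in shares.items() if share < threshold]
    return shares, underrepresented

# Hypothetical tags for a small image set
tags = ["lighting_bright"] * 70 + ["lighting_dim"] * 25 + ["lighting_backlit"] * 5
shares, flagged = audit_representation(tags)
print(flagged)  # ['lighting_backlit'] falls below the 10% threshold
```

A check like this only surfaces gaps along the dimensions a team has thought to tag, which is why diverse teams and review processes matter alongside the tooling.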

Consider Webex virtual backgrounds, which are intended to conceal users' surroundings in order to improve privacy, security, professionalism, and fun. Early research versions of this feature did not perform well for certain hair textures and styles, or lighting conditions, reflecting the state of the art at the time; in some cases they inadvertently filtered out parts of a user's appearance. During the design phase, before releasing the feature, our engineers recognized the need for larger, more diverse training data sets that were representative of the Webex user base. We improved the training data by using data that was anonymized, ethically sourced, user-contributed, open-source, provided with explicit consent, and otherwise respectful of individual privacy.

The result was a more representative training set, better-performing algorithms, and a much more inclusive user experience for all.

We built on the Webex team's learnings and launched our Responsible AI Framework in 2022, which is based on six principles: transparency, fairness, accountability, privacy, security, and reliability. Our Responsible AI Working Group promotes adherence to these principles by conducting Responsible AI Impact Assessments on new technologies, providing guidance on how to manage risk to human rights, and providing accountability through incident reporting of human rights, privacy, and security concerns.

If you would like to learn more about the progress we’re making towards a more inclusive future, click here.