
FAQ
AI -> Artificial Intelligence
ML -> Machine Learning
While there are several good definitions for these terms, as they have been in use for over six decades, we use the following definitions from a practical perspective.
Artificial Intelligence is about behaviors computer systems may exhibit that are commonly associated with human intelligence. We break these behaviors down at a high level as follows: 1) learn from data (yes, everything is data to computers), 2) organize what is learned from data, and 3) apply what is learned to solve problems. We recognize this is a very simplistic view, but we assure you that keeping it simple helps.
Machine Learning is about the first behavior we mention: learning from data through algorithms.
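As a minimal sketch of "learn from data," consider the following scikit-learn example (scikit-learn is one of the libraries discussed later in this FAQ). The dataset and model choice are illustrative assumptions, not part of TeamingSpace.

    # A minimal "learn from data" sketch using scikit-learn.
    # Dataset and model choice are illustrative assumptions only.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)            # 1) data to learn from
    model = DecisionTreeClassifier().fit(X, y)   # the learning step
    print(model.predict(X[:1]))                  # 3) apply what was learned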
ML models are generally termed black box models because they do not explain their decision processes well.
We define black box models to exhibit the following:
1) They don't explain their decision processes
2) They don't accept course corrections through Advice
3) They don't communicate during decisioning steps and only provide final results
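To make this definition concrete, here is a hedged sketch of what a typical black box interface looks like in practice; the XGBoost model and synthetic data are assumptions for illustration.

    # A black box interface: input in, final answer out, nothing else.
    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = rng.random((100, 4))
    y = (X[:, 0] > 0.5).astype(int)
    model = XGBClassifier().fit(X, y)

    # No explanation, no channel for advice, no intermediate steps:
    print(model.predict(X[:1]))    # only the final result, e.g. [1]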
ML models are typically generated by specific algorithms through model training processes. Traditional ML models built with libraries such as scikit-learn, XGBoost, and TensorFlow are generated as a single software module. They behave in an all-or-nothing manner.
We define monolithic ML models to exhibit the following:
1) They are generated as a single software module
2) They are managed as a single software module
3) They are deployed as a single module, perhaps on a pipeline with other modules
4) They are replaced/upgraded as a single software module
5) Their accuracies are measured at the model level even though they provide multiple decisions
6) In a multi-model ecosystem, models communicate only at the end of their processing, without visibility into changes in the ecosystem that may invalidate their conclusions
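A hedged sketch of points 1) through 4), assuming scikit-learn and joblib: the entire trained model is serialized, deployed, and later replaced as one opaque artifact.

    # Monolithic lifecycle: one artifact for generation, management,
    # deployment, and replacement. File name and models are assumptions.
    import joblib
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    joblib.dump(model, "model.joblib")           # deploy as a single module

    # An upgrade replaces the whole module, all-or-nothing:
    model_v2 = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    joblib.dump(model_v2, "model.joblib")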
SHAP stands for SHapley Additive exPlanations; SHAP values are used to explain the output of a machine learning model.
SHAP explanations are termed reactive because they are produced after a model has already made its decision for a specific input. Typically, traditional ML models are built first, and then data scientists curate a set of test inputs to interpret, as comprehensively as possible, how the ML model makes its decisions. The explanations are therefore entirely dependent on the curated datasets being comprehensive.
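A hedged sketch of this reactive workflow using the shap library; the XGBoost regressor and synthetic data are assumptions for illustration.

    # Reactive explanation: the model is built first, explanations come after.
    import numpy as np
    import shap
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = 2 * X[:, 0] + X[:, 1]                 # synthetic target
    model = XGBRegressor().fit(X, y)          # the model is built first...

    explainer = shap.Explainer(model)         # ...the explainer comes after
    shap_values = explainer(X[:5])            # explains only curated inputs
    print(shap_values.values)                 # per-feature contributions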
SHAP explanations are specific to a model and its associated inputs. Thus, if an AI is built from multiple ML models, SHAP does not provide composite explanations at the AI level. Many real-life business problems require multiple ML models to collaborate.
In such scenarios, businesses tend to create additional software that produces related datasets and combines SHAP explanations from multiple models into an explanation of the overall AI. Consequently, whenever the ML models are upgraded, the explanation-management applications must also be upgraded.
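A hedged sketch of that glue code; the two models, the equal weighting, and the shared feature space are all assumptions, since SHAP itself defines no composite, AI-level explanation format.

    # App-specific code to combine SHAP values from two models.
    # This combination step must be revisited whenever either model changes.
    import numpy as np
    import shap
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    model_a = XGBRegressor().fit(X, X[:, 0] + X[:, 2])
    model_b = XGBRegressor().fit(X, X[:, 1])

    sv_a = shap.Explainer(model_a)(X[:5]).values
    sv_b = shap.Explainer(model_b)(X[:5]).values
    composite = 0.5 * sv_a + 0.5 * sv_b       # assumed equal weighting
    print(composite)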
Yes. A few common considerations are important.
1) TeamingSpace will treat external black box ML models as external sources of data rather than decisions. Traditional ML model outcomes do not meet the Open Decision Steps standard that TeamingSpace has established.
2) It is feasible to use an external ML model's inputs and outputs to learn about the model, where such an approach is permitted. In this use case, TeamingSpace Open Decision Steps can be used to understand how your current black box model is making decisions (see the sketch after this list).
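As a hedged sketch of the general pattern behind point 2), often called surrogate modeling, the black box's inputs and outputs become a dataset from which a more transparent model is learned. The decision tree below is only a generic stand-in; TeamingSpace's own Open Decision Steps mechanism is not shown here.

    # Learn about a black box from its inputs and outputs.
    # The black box and surrogate choices are illustrative assumptions.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = rng.random((300, 4))
    y = (X[:, 0] + X[:, 1] > 1).astype(int)
    black_box = XGBClassifier().fit(X, y)     # the external model

    y_bb = black_box.predict(X)               # its outputs become data...
    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_bb)
    print(export_text(surrogate))             # ...yielding readable steps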
Models built with traditional ML libraries such as scikit-learn, XGBoost, and TensorFlow do not generate outcomes that adhere to the Open Decision Steps standard that TeamingSpace has established.
However, the TeamingSpace AI Platform's Python environment does not prevent the use of other ML libraries. Traditional ML models are treated as external sources of data.
TeamingSpace ML algorithm outputs have been compared with those of traditional black box ML models, and with the reactive model explanations provided by SHAP/LIME, across a variety of datasets. We anticipate clients using this option to validate whether TeamingSpace ML algorithm outputs are acceptable.
Additionally, TeamingSpace Open Decision Steps are implemented in multiple formats to help with validation. The accuracy measures associated with Open Decision Steps from the algorithmic learning process are verifiable through SQL queries. Accuracy measures associated with Open Decision Steps from Agentic AI use are verifiable using Decision Path Analytics reports.
The TeamingSpace AI Platform will include additional Knowledge Base verifications in subsequent releases.