Updated in December 2025 | Subscribe to watch greytHR how-to videos
When you hear about AI being integrated into greytHR, you may naturally wonder: "Is my company’s sensitive data secure?" With the launch of greytHR NAVOS, we are introducing an intelligent, intuitive HR platform designed to help you work smarter. However, we understand that this innovation raises important questions about data privacy, especially around Personally Identifiable Information (PII).
We want to reassure you from the start: greytHR NAVOS is built with privacy and security at its core. Our guiding principle is simple: Your data is yours. NAVOS operates exclusively within your greytHR environment, upholding your role-based permissions, and never uses your company’s or employees' data to train AI models for anyone else.
Your data is never shared for training AI models outside your account.
AI responses are grounded in your company’s greytHR data and user permissions.
No public model dependency – your data is never processed on public platforms like OpenAI or Google.
Access control is preserved – NAVOS sees only what you're allowed to see.
NAVOS does not use your company data, employee records, or any PII to train AI models for other users. Your data stays private and securely stored within greytHR.
NAVOS is built on a “grounded” AI framework. This means that when you ask a question (for example, “What is John Doe’s leave balance?”), the AI securely retrieves the relevant information from your own greytHR account, based on your access permissions. It then uses its language model to present the answer in a clear, conversational way.
Unlike generic AI tools, NAVOS doesn’t pull data from a shared or global training model. Think of it as a smart librarian—it doesn’t rewrite books from everyone’s collection; it simply finds the right book from your library and explains what’s inside.
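The grounded flow above can be sketched in a few lines. This is an illustrative sketch only: the function and data names (`answer_query`, `records`) are assumptions for the example, not actual greytHR APIs, but they show the key idea that the access check happens before any lookup, and the answer comes only from the caller's own account data.

```python
# Hypothetical sketch of a "grounded" AI answer flow.
# All names here are illustrative, not real greytHR interfaces.

def answer_query(user_permissions, records, query_field, employee):
    """Answer only from the caller's own account data,
    scoped by that caller's permissions."""
    if employee not in user_permissions:          # permission check comes first
        return "You are not authorized to view this record."
    value = records[employee][query_field]        # grounded lookup, no global model data
    return f"{employee}'s {query_field} is {value}."  # conversational phrasing step

# Example: a user allowed to see only John Doe's record
records = {"John Doe": {"leave balance": 12}}
print(answer_query({"John Doe"}, records, "leave balance", "John Doe"))
# -> John Doe's leave balance is 12.
print(answer_query(set(), records, "leave balance", "John Doe"))
# -> You are not authorized to view this record.
```

The point of the sketch: the language model only rephrases data already retrieved under your permissions; it never supplies facts of its own.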
NAVOS processes only the data required to respond to your specific request—nothing more. Here’s how it works across different use cases:
For AI Search: It uses your query (e.g., “add LOP days for [employee name] on [date]”) along with greytHR metadata to locate the correct report.
For AI Actions: It processes your instruction (e.g., “approve all pending leaves for [date/month]”) and accesses only the necessary employee data to complete the task.
For the AI Chatbot: It reads your question (e.g., “how to configure ESI settings”) and fetches the answer from our secure knowledge base and internal documentation.
In every case, NAVOS accesses only what’s needed to fulfill your request—within the boundaries of your greytHR permissions.
All AI-related processing happens inside greytHR’s secure, encrypted systems. We do not send your data to third-party AI services or public cloud models. The entire AI stack is an in-house solution integrated directly with your greytHR platform.
NAVOS adheres to your existing greytHR permissions model. This is a fundamental security feature.
Permissions-Based Access: The AI provides information or executes actions only when the specific user has been granted the corresponding permission. For example, a manager can only view and approve leave for their direct reports, and NAVOS will respect that same access boundary.
No Unauthorized Access: greytHR employees and the AI model itself do not have unauthorized access to your PII. Our team follows strict security protocols, and access is granted only on a need-to-know basis and always with explicit permission.
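The manager example above amounts to a simple gate: the AI mirrors the role boundary that already exists. A minimal sketch, assuming a hypothetical direct-reports mapping (not the real greytHR data model):

```python
# Illustrative permission gate; DIRECT_REPORTS is assumed org data,
# not an actual greytHR structure.

DIRECT_REPORTS = {"manager_a": {"emp_1", "emp_2"}}

def can_approve_leave(manager_id, employee_id):
    """Mirror the existing role-based boundary: a manager may act
    only on their own direct reports."""
    return employee_id in DIRECT_REPORTS.get(manager_id, set())

print(can_approve_leave("manager_a", "emp_1"))  # -> True
print(can_approve_leave("manager_a", "emp_9"))  # -> False (outside the boundary)
```

Because the check reuses the same boundary the application already enforces, the AI cannot widen anyone's access.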
NAVOS stores your search and command history securely within your account to improve your experience over time. This history is never shared, and you can access or review it anytime.
While the AI is highly accurate, it is not infallible. We have implemented several safeguards to prevent errors:
Confirmation for Complex Actions: For any complex or critical action (e.g., "generate payslips for all new hires"), NAVOS will always ask for your confirmation before execution.
Feedback Loop: You can provide feedback on any incorrect or unhelpful answer, which helps the system learn and improve.
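The confirmation safeguard described above is essentially a two-step gate: critical actions are staged, not executed, until the user says yes. A minimal sketch, with hypothetical action names (not the real greytHR command set):

```python
# Hypothetical sketch of the confirmation safeguard for critical actions.
# Action names are assumptions for illustration.

CRITICAL_ACTIONS = {"generate_payslips", "approve_all_pending_leaves"}

def execute(action, confirm):
    """Run routine actions directly; hold critical ones until confirmed."""
    if action in CRITICAL_ACTIONS and not confirm:
        return "pending_confirmation"   # nothing has changed yet
    return "executed"

print(execute("generate_payslips", confirm=False))  # -> pending_confirmation
print(execute("generate_payslips", confirm=True))   # -> executed
```

Staging the action first means an ambiguous or mistaken instruction can be cancelled before it touches any records.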
NAVOS adheres to the same high standards of compliance as the rest of greytHR. It aligns with major data privacy laws and frameworks, including:
GDPR (General Data Protection Regulation)
CCPA (California Consumer Privacy Act)
DPDPA (India’s Digital Personal Data Protection Act)
Because all AI processing happens within your greytHR account and under your permissions, compliance is built-in by design.
NAVOS is designed and operated to ensure fair, unbiased, and non-discriminatory behavior. The AI does not make autonomous decisions about employees, payroll, attendance, performance, or any sensitive HR data. All outputs are generated strictly based on user queries and existing system data, and the AI does not modify or infer information beyond what is already present in greytHR.
greytHR continuously monitors and improves AI behavior to avoid any form of discrimination based on gender, religion, caste, ethnicity, age, disability, or any protected attributes. Any action performed by the AI remains subject to user validation and follows the permissions configured by the customer.
NAVOS is classified as a Low-Risk AI System under widely accepted AI governance frameworks (including India’s Draft AI Guidelines, Singapore’s Model AI Governance Framework, and GCC AI Ethics principles). This classification is based on the following factors:
No autonomous decision-making
No predictions that affect employee rights or outcomes
No biometric or sensitive attribute processing
AI operates strictly within customer-authorized permissions
All processing occurs within the greytHR environment
Human-in-control and human-in-the-loop safeguards are implemented
No cross-tenant or external model training occurs
As a low-risk system, NAVOS is designed to augment user experience, not replace human judgment or create legal or financial impacts.
greytHR maintains an internal mechanism to detect, log, and investigate AI-related incidents, including:
Incorrect AI-generated responses
Unintended access to data
Execution of unintended actions
System misuse or abuse
Repeated user-reported inaccuracies
In the event of a reportable AI incident, greytHR will:
Immediately disable or restrict the affected AI feature (if required)
Investigate the root cause and implement corrective actions
Notify customers where required by applicable law
Document the incident and preventive actions in the internal compliance log
Incorporate feedback into continuous model improvement
Complex or high-impact actions performed by users always require manual confirmation to prevent accidental execution.
NAVOS follows a structured AI lifecycle management process, including:
Version-controlled model releases
Internal testing and quality assurance before deployment
Evaluation for accuracy, safety, and permission alignment
Regression testing of all AI actions
No modification of customer data during testing
Clear communication of significant feature changes through release notes
greytHR does not deploy AI updates that impact customer workflows without appropriate validation. All updates are aimed at improving reliability, accuracy, and user experience while maintaining robust privacy and security safeguards.
All AI-related processing, including query interpretation, context retrieval, and action execution, takes place exclusively within the greytHR cloud infrastructure located in the customer’s applicable region.
No AI-related data, prompts, employee information, or PII is transferred to third-party AI platforms, public LLMs, or external systems for processing or model training.
Customer data always remains within the greytHR security and compliance boundaries.
All AI actions such as searches, navigation triggers, and data retrievals are logged securely within the greytHR application.
These logs may include:
Timestamp of the request
User ID and permissions at the time of the request
Action performed or content retrieved
Whether user confirmation was required
Outcome of the request
These logs help ensure transparency, traceability, and accountability, and can be used by customers during internal audits, compliance reviews, or incident analysis.
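For audit purposes, an entry with the fields listed above might look like the following. The field names and values here are assumptions for illustration, not the actual greytHR log schema:

```python
# Illustrative shape of an AI audit-log entry built from the fields listed
# above; field names are assumptions, not the actual greytHR schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditEntry:
    timestamp: str               # when the request was made (UTC)
    user_id: str                 # who made the request
    permissions: list            # permissions held at request time
    action: str                  # action performed or content retrieved
    confirmation_required: bool  # whether user confirmation was needed
    outcome: str                 # e.g. "completed", "denied", "cancelled"

entry = AIAuditEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user_id="hr_admin_01",
    permissions=["leave.view", "leave.approve"],
    action="approve pending leaves for the selected month",
    confirmation_required=True,
    outcome="completed",
)
print(asdict(entry))  # a flat dict, easy to export during an audit
```

Capturing the permissions held at request time (not just the user ID) is what makes after-the-fact access reviews possible.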
NAVOS does not perform any irreversible or sensitive actions without explicit human confirmation.
Users retain full control over:
Execution or cancellation of suggested actions
Editing or overriding AI-generated steps
Reviewing AI-generated information before acting
This ensures compliance with global requirements for human oversight, controllability, and safe deployment of AI systems.
Users must ensure that NAVOS features are used solely for legitimate HR, payroll, and compliance purposes.
Users are responsible for ensuring:
Queries do not request information beyond their authorized permissions
AI suggestions are reviewed before execution
Misuse is reported promptly through the support channel
greytHR reserves the right to temporarily disable AI features if misuse, security concerns, or policy violations are detected.
NAVOS operates entirely within your greytHR environment, upholding your existing role-based permissions and never using your company’s or employees' data to train AI models for anyone else. All data is securely stored and processed within the greytHR platform, ensuring your data is kept private and safe.
No. NAVOS does not use your company data, employee records, or any Personally Identifiable Information (PII) to train AI models for other users. Your data stays within your account and remains confidential.
NAVOS is built on a "grounded" AI framework. This means that when you ask a query (e.g., "What is John Doe’s leave balance?"), the AI securely retrieves the relevant, real-time information from your own greytHR account, based on your access permissions. It then uses its language model to clearly and conversationally present the answer.
NAVOS accesses your data only when necessary to respond to a specific request. For example:
AI Search: It retrieves relevant information (e.g., reports or records) based on your query.
AI Actions: It processes your command (e.g., “approve all pending leaves”) and uses only the necessary employee data to complete the task.
AI Chatbot: It accesses the secure knowledge base to provide accurate responses to your questions.
No. All AI-related processing happens within greytHR’s secure, encrypted systems. Your data is never sent to third-party AI services or public cloud platforms.
NAVOS strictly adheres to your existing greytHR role-based permissions. This is a fundamental security feature. The AI provides information or executes actions only when the specific user has been granted the corresponding permission. For instance, if a manager can only view leave for their direct reports, NAVOS will respect that same access boundary.
No. greytHR’s employees and the AI model itself do not have unauthorized access to your PII. Our team follows strict security protocols, and access is granted only on a need-to-know basis, and always with explicit permission.
Yes, NAVOS stores your search and command history securely within your account to improve your experience over time. This history is not shared outside your account, and you can review it anytime.
NAVOS includes several safeguards to ensure accuracy:
Confirmation for Complex Actions: For any complex or critical action (e.g., "generate payslips for all new hires"), NAVOS will always ask for your confirmation before execution.
Feedback Loop: You can provide feedback on any incorrect or unhelpful answer, which helps the system learn and improve within your secure environment.
No. NAVOS is included as part of your greytHR platform. There are no additional charges for using it.
Yes, NAVOS adheres to the same high standards of compliance as the rest of greytHR. It aligns with major data privacy laws and frameworks, including:
GDPR (General Data Protection Regulation)
CCPA (California Consumer Privacy Act)
DPDPA (India’s Digital Personal Data Protection Act)
No. NAVOS does not make autonomous decisions about employees, payroll, attendance, performance, or other sensitive HR inputs. It only provides outputs based on your query and the data already available in your greytHR account.
No. NAVOS does not modify, infer, or generate information beyond what exists in your greytHR data. It only retrieves and presents information you are authorized to access.
greytHR continuously monitors AI behavior to prevent bias or discriminatory patterns related to gender, caste, religion, ethnicity, age, disability, or other protected attributes.
Yes. All actions remain subject to user validation and follow your configured permissions.
NAVOS is classified as a Low-Risk AI System under widely accepted AI governance frameworks. This classification is based on the following factors:
No autonomous decision-making
No predictions that affect employee rights or outcomes
No biometric or sensitive attribute processing
AI operates strictly within customer-authorized permissions
All processing occurs within the greytHR environment
Human-in-control and human-in-the-loop safeguards are implemented
No cross-tenant or external model training occurs
No. It is designed to assist users, not make legal, financial, or employment-related decisions.
greytHR has an internal mechanism that logs and investigates incidents such as incorrect responses, unintended data access, accidental actions, misuse, or repeated inaccuracies.
greytHR may:
Temporarily disable or restrict affected features
Investigate and fix the root cause
Notify customers if required by law
Log actions for compliance
Apply improvements to prevent recurrence
Yes. Any complex action requires explicit user confirmation.
Every model version undergoes internal testing, quality checks, regression testing, permission validation, and safety reviews.
No. Updates are released only after ensuring that customer workflows remain unaffected.
Yes. Significant AI-related changes are shared through release notes.
No. All AI processing happens inside the greytHR cloud infrastructure within your applicable region.
No. No prompts, PII, or HR data leaves the greytHR environment or goes to third-party AI platforms.
Yes. NAVOS securely logs:
Request timestamps
User permissions
Action details
Whether confirmation was required
Final outcomes
Yes. They support compliance reviews, internal audits, and incident analysis.
No. Irreversible or sensitive actions always require manual confirmation.
Yes. Users maintain complete control to review, edit, or cancel any AI-generated step.
Users must ensure:
Queries stay within their authorized permissions
AI suggestions are reviewed before execution
Misuse is reported promptly
Yes. greytHR may temporarily restrict features in case of misuse, security risks, or policy violations.
Related articles:
▶ Video - Watch our how-to videos to learn more about greytHR.
📢 Product Update - Read about the product updates.