AI Security in Hotels and Accommodation Facilities

Privacy Shield

Artificial intelligence tools compliant with GDPR for hotels, guesthouses, hostels, and other accommodation facilities. Full guest privacy and data security.

See how it works

Your data WILL NOT be used to train AI models
We use end-to-end encryption
Role-based access control and SSO login are available
We regularly perform OWASP tests
We ensure GDPR compliance

Flexibility in LLM selection

CogniVis enables integration with various large language model (LLM) providers such as OpenAI, Anthropic, and others. This flexibility allows cost and performance optimization, as well as quick adaptation to technological changes.

Compatibility with future models

We design integrations with the future in mind. When a stronger model appears, you can enable it without system overhaul.

AI advancement = better performance at your property

New generations of models provide more accurate answers, better context understanding, and higher task automation.

No vendor lock-in

If Anthropic releases a model better than OpenAI's, you can switch with a single decision: no content migration, no downtime.

Greater cost efficiency

You choose the provider with the best quality-to-price ratio: premium models for complex tasks, cheaper ones for routine work.

Failure risk reduction

When the primary provider experiences an outage, traffic is switched to an alternative model. Continuity without manual intervention.
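
The failover idea above can be sketched in a few lines. This is an illustrative sketch only, not the CogniVis implementation: the `Provider` type, the prompt-to-answer call interface, and the provider names are all assumptions made for the example.

```python
# Hypothetical sketch of LLM provider failover: try providers in
# priority order and fall back to the next one on any error.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    call: Callable[[str], str]  # prompt -> answer

def ask_with_failover(providers: list[Provider], prompt: str) -> str:
    """Return the first successful answer; raise if all providers fail."""
    last_error: Exception | None = None
    for provider in providers:
        try:
            return provider.call(prompt)
        except Exception as err:
            last_error = err  # e.g. a primary outage: try the next provider
    raise RuntimeError(f"all providers failed: {last_error}")

# Usage: the primary raises, so traffic switches to the backup model.
def flaky_primary(prompt: str) -> str:
    raise TimeoutError("provider outage")

def stable_backup(prompt: str) -> str:
    return f"answer to: {prompt}"

print(ask_with_failover(
    [Provider("openai", flaky_primary), Provider("anthropic", stable_backup)],
    "When is check-out?",
))  # prints "answer to: When is check-out?"
```

In a real deployment the fallback order, retry budget, and health checks would be configuration, not code.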

Experiments and comparisons

You can run A/B tests and benchmarks between models. Decisions are based on quality, response time, and cost metrics.
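
A benchmark of this kind can be as simple as timing the same prompts against each candidate model. The harness below is a sketch under assumed interfaces (a `model_call` function and a flat per-call cost); the prompts and costs are made up for illustration.

```python
# Illustrative harness for comparing models on latency and cost.
import time
from statistics import mean

def benchmark(model_call, prompts, cost_per_call):
    """Run all prompts through one model; collect latency and cost."""
    latencies, answers = [], []
    for prompt in prompts:
        start = time.perf_counter()
        answers.append(model_call(prompt))
        latencies.append(time.perf_counter() - start)
    return {
        "avg_latency_s": mean(latencies),
        "total_cost": cost_per_call * len(prompts),
        "answers": answers,  # kept so answer quality can be rated separately
    }

# Usage with a stand-in model; a real run would call the provider API.
report = benchmark(lambda p: f"reply to: {p}",
                   ["Wi-Fi password?", "Breakfast hours?"],
                   cost_per_call=1.0)
print(report["total_cost"], len(report["answers"]))  # prints "2.0 2"
```

Quality scoring (human or automated rating of the collected answers) is the part such a harness cannot fake; the sketch only captures the time and cost axes.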

Supported providers

CogniVis + LLM on your infrastructure

We can deploy both CogniVis and the LLM model locally (on-premise) on your own infrastructure, without the need to use public cloud. This gives you full control over your data and allows you to meet even the most stringent requirements.

OpenAI

Strong language understanding and a stable tool ecosystem. Best for most use cases.

Anthropic

More expensive models focused on precision and maximum accuracy.

Amazon

Wide range of models and integration with AWS. Good if you already use Amazon cloud.

Google

Cheaper, lighter, and faster models with strong multimodality.

Meta

Local models that we deploy directly on your infrastructure, without public cloud.

Microsoft

Models from Azure OpenAI Service offering the highest cloud security standards.

Deepseek

Very cheap and lightweight models that can also run locally, without cloud.

Mistral AI

European provider with very competitive pricing and efficient models.


Frequently Asked Questions

Will my data be used to train AI models?

No, CogniVis does not process or store customer data for AI training purposes. We only use services from LLM providers who commit not to train on data entered by users. Alternatively, we can deploy CogniVis using local LLMs installed on the client’s servers, which do not require data sharing.

Where and for how long is my data stored?

Data is by default stored in Poland (EU) or, upon request, at the client’s premises or with a selected cloud service provider. When using LLM platforms, data may be processed by external providers (e.g., OpenAI) or stored locally, depending on the selected model. For CogniVis, data is stored for the entire duration of the contract and deleted 30 days after its termination.

How is my data protected during transfer?

Data is protected with TLS 1.3 encryption during transmission. For example, during knowledge indexing, data is retrieved via APIs secured with TLS 1.3, and when sending messages to LLM systems, communication is likewise encrypted with TLS.
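
Requiring TLS 1.3 as the floor for outbound connections can be expressed directly with the Python standard library; this is a generic sketch, not the CogniVis codebase.

```python
# Minimal sketch: refuse anything older than TLS 1.3 on outbound
# connections, using only the standard library.
import ssl

context = ssl.create_default_context()  # certificate checks stay enabled
context.minimum_version = ssl.TLSVersion.TLSv1_3

# Handshakes with servers that only offer TLS 1.2 or older now fail.
print(context.minimum_version)  # prints "TLSVersion.TLSv1_3"
```

Any HTTP client that accepts an `ssl.SSLContext` (the stdlib `urllib`, for example) can then reuse this context for every request.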

What security tests are conducted?

After each deployment, our software undergoes penetration testing in accordance with OWASP standards to identify and mitigate security threats.

How is the API protected against unauthorized access?

APIs are secured with TLS 1.3 encryption for every endpoint and an additional token-based authentication layer for endpoints that require it to prevent unauthorized access and misuse.
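
A token-based authentication layer of the kind described can be reduced to a per-endpoint lookup plus a constant-time comparison. The token store and endpoint names below are illustrative assumptions, not real CogniVis configuration.

```python
# Sketch of a token check for protected endpoints. hmac.compare_digest
# avoids timing side channels when comparing secrets.
import hmac

API_TOKENS = {"reporting": "s3cr3t-token"}  # illustrative per-endpoint secrets

def is_authorized(endpoint: str, presented_token: str) -> bool:
    expected = API_TOKENS.get(endpoint)
    if expected is None:
        return False  # unknown endpoint: deny by default (fail safe)
    return hmac.compare_digest(expected, presented_token)

print(is_authorized("reporting", "s3cr3t-token"))  # prints "True"
print(is_authorized("reporting", "wrong-token"))   # prints "False"
```

In practice the tokens would live in a secrets manager and be rotated, but the deny-by-default shape stays the same.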

Is there a backup and data recovery plan?

CogniVis offers regular backups as part of certain plans, providing redundant, secure storage and data recovery.

Can I deploy CogniVis on my own servers?

In cases that require it, we offer CogniVis deployment on the client's servers or, alternatively, with providers such as OVH or AWS, which hold SOC 2 and ISO 27001 certifications.

Do you support SSO?

Yes, integration with Single Sign-On (SSO) is implemented on request. SSO enhances security by centralizing authentication, reducing password-related risks, and enabling better access control through existing identity management systems (e.g., Microsoft Azure AD).

What access controls are available on the platform?

CogniVis uses role-based access control (RBAC) with user groups for precise permission assignment. When integrating with specific third-party solutions, access levels for each data source are defined during implementation.
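
The group-to-role-to-permission chain of an RBAC model like this fits in a few dictionaries. The roles, groups, and permission names below are hypothetical examples chosen for a hotel context, not the product's actual schema.

```python
# Minimal RBAC sketch: users belong to groups, groups carry roles,
# roles grant permissions. All names here are illustrative.
ROLE_PERMISSIONS = {
    "receptionist": {"read_bookings"},
    "manager": {"read_bookings", "read_invoices", "manage_users"},
}
GROUP_ROLES = {"front_desk": ["receptionist"], "admins": ["manager"]}

def permissions_for(groups):
    """Union of all permissions granted via the user's groups."""
    perms = set()
    for group in groups:
        for role in GROUP_ROLES.get(group, []):
            perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

def can(groups, permission):
    return permission in permissions_for(groups)

print(can(["front_desk"], "read_invoices"))  # prints "False"
print(can(["admins"], "manage_users"))       # prints "True"
```

Checks like `can(...)` then guard each data source, which is where the per-integration access levels mentioned above would plug in.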

Does the CogniVis team see the questions I ask?

It depends on the settings. If encryption is enabled, we only see the number of questions and token usage—there is no access to the content of questions or answers. If encryption is disabled, we can analyze asked questions, which helps in refining and optimizing AI performance. We highly recommend keeping encryption disabled in the initial implementation phase to quickly tailor the system to your needs.

How does CogniVis handle data storage and deletion?

CogniVis adheres to GDPR-compliant data storage and deletion policies—transient data (e.g., cache, in-memory operations) is not stored longer than necessary and is deleted immediately after processing. Persistent data is stored only as long as needed to provide services and fulfill legal obligations. Data retention periods are regularly reviewed to avoid unnecessary storage. Clients have full control over their data and can request deletion at any time under GDPR Article 17. Upon deletion request, CogniVis ensures secure and irreversible data removal, with a maximum retention period of 30 days after contract termination.

What is the incident response process for data breaches?

CogniVis employs a structured incident response process to minimize impact and ensure transparency.
1. Detection and assessment: continuous monitoring detects threats; critical incidents are analyzed within 4 hours of detection.
2. Containment and mitigation: affected systems are isolated, API keys rotated, and password resets enforced if needed.
3. Investigation and analysis: security experts determine the cause of the breach and implement measures to prevent recurrence.
4. Client notification: affected users are notified by email as soon as possible, with a description of the breach, the affected data, and recommended actions.
5. Regulatory notification: in the case of personal data breaches, the relevant authorities are notified within 72 hours.
6. Remediation and prevention: security policies are updated and enhanced measures implemented to avoid future incidents.

How do you enforce MFA and strong login mechanisms for your employees?

We enforce MFA for all accounts, especially privileged ones, with support for authenticator mobile apps. The password policy requires at least 16 characters, prefers passphrases, checks strength, and blocks weak passwords. We integrate SSO and monitor unusual logins, reducing the risk of account takeover.

How do you manage permissions and access audits in the system?

We implement the principle of least privilege, conduct periodic access reviews, and have an approval process for elevated permissions. We use IAM for centralization, automate onboarding and offboarding, and grant administrative access temporarily, on a just-in-time basis. These measures reduce exposure and support audit compliance.

Do you maintain centralized logs and real-time security alerts?

We centralize application and system logs, establish behavior baselines, and monitor infrastructure in real time. SIEM generates alerts on anomalies and suspicious access attempts, and regular log analysis supports early incident detection. Operations on sensitive data are also logged, facilitating investigations.

What is your password policy and how do you securely store passwords?

The password policy requires at least 16 characters, prefers passphrases, does not enforce rotation without an incident, checks strength at creation, and blocks common passwords. Account lockout activates after repeated failed attempts. Passwords are stored only as hashes produced by strong algorithms such as bcrypt or Argon2.
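
The policy checks and hash-only storage described above can be sketched as follows. Because bcrypt and Argon2 require third-party packages in Python, the standard library's memory-hard `hashlib.scrypt` stands in here; the cost parameters and the tiny blocklist are illustrative only.

```python
# Sketch of password policy checks and hash-only storage.
# scrypt substitutes for bcrypt/Argon2 to keep the example stdlib-only.
import hashlib, hmac, os

COMMON_PASSWORDS = {"password123456789", "qwertyqwertyqwerty"}  # tiny demo blocklist

def password_acceptable(pw: str) -> bool:
    """Enforce minimum length and reject known-common passwords."""
    return len(pw) >= 16 and pw.lower() not in COMMON_PASSWORDS

def hash_password(pw: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(pw.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest  # store salt + digest; never the plaintext

def verify_password(pw: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(pw.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # prints "True"
```

A real deployment would also add the lockout counter and a much larger breached-password blocklist mentioned in the policy.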

Do you support secure use of personal BYOD devices?

BYOD is allowed only when no better alternative exists. BYOD policy applies, including containerization of corporate data, restricted access to sensitive information, and secure data removal procedures from devices. Employees are trained on safe personal device practices to reduce leak risks.

How do you secure remote access and connections for traveling employees?

Remote access is implemented via strongly encrypted VPN with traffic segmentation and usage monitoring. Network infrastructure undergoes audits and regular updates. Remote work enforces physical security rules, including clean screen and clean desk. These measures limit eavesdropping and device takeover risks.

What is your vulnerability update and management process?

We maintain an inventory of components and versions, prioritize security patches, and automate package management. Updates are verified in a test environment mirroring production, with regression tests and rollback plans. Backups are made before changes, and the entire process is documented and versioned for reproducibility.

What secure software development practices do you apply at each lifecycle stage?

We apply OWASP Secure Coding Practices, use vetted libraries, and enforce the principle of least privilege in code. We conduct code reviews emphasizing security and use SAST and DAST. Security is considered from design, and coding standards are enforced and automatically checked for consistency.

How do you validate input data and sanitize application outputs?

We rigorously validate all input data, preferring whitelists of values. Outputs are sanitized, and JSON/XML parsing and serialization are done securely. We use prepared statements and mechanisms to prevent XSS, CSRF, and injection attacks. Fail-safe is the default behavior to limit error impacts.
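
The whitelist-plus-prepared-statement combination can be shown concretely with `sqlite3`. The table, columns, and allowed sort keys are hypothetical; the point is that user values go through parameter binding, and anything that must appear in the SQL text itself passes a whitelist first.

```python
# Sketch: whitelist validation for a column name, parameter binding
# for a user-supplied value. Failing safe means rejecting on doubt.
import sqlite3

ALLOWED_SORT = {"check_in", "check_out", "room"}  # whitelist of values

def fetch_bookings(conn, guest_id, sort_by):
    if sort_by not in ALLOWED_SORT:  # fail safe on unexpected input
        raise ValueError(f"invalid sort key: {sort_by!r}")
    # guest_id is bound as a parameter, never interpolated into SQL;
    # sort_by may be interpolated only because it passed the whitelist.
    sql = f"SELECT room FROM bookings WHERE guest_id = ? ORDER BY {sort_by}"
    return conn.execute(sql, (guest_id,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (guest_id INTEGER, room TEXT, check_in TEXT)")
conn.execute("INSERT INTO bookings VALUES (1, '101', '2024-06-01')")
print(fetch_bookings(conn, 1, "check_in"))  # prints "[('101',)]"
```

The same two-track rule (bind values, whitelist identifiers) applies regardless of database engine.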

How do you test attack resistance and improve readiness?

We regularly conduct red team vs blue team exercises, phishing simulations, and incident scenarios. We analyze results, update detections, procedures, and training to boost team readiness. The test program is refreshed for new attack vectors and measures real response improvements.

What is your disaster recovery plan and DR testing?

We have a DR plan defining critical systems, RTO, and RPO. Recovery procedures for various scenarios are documented and tested at least annually. Test results are analyzed and improvements implemented to reduce downtime and minimize data loss during service failures.

Do you ensure redundancy and business continuity for critical services?

We design redundancy for key components as well as backup power and cooling. Failover tests are performed regularly. The continuity plan specifies critical functions, remote work modes, and alternate locations with resources. Defined communication channels and roles during crises speed up responses.

How do you implement secure data deletion, archiving, and minimization?

We apply data minimization and need-to-know principles. Automated mechanisms for deleting or archiving obsolete records and secure, irreversible deletion methods are implemented. Logs of sensitive data operations and regular access reviews reduce misuse risk and meet compliance requirements.

Do you use end-to-end encryption for sensitive data and API transmissions?

Data in transit is encrypted with TLS 1.3, and for especially sensitive information, additional application-level end-to-end encryption is used. Certificates are rotated and unencrypted connections blocked to maintain compliance with the latest standards. This strengthens confidentiality beyond the transport layer alone.

How do you secure and control access to APIs and external integrations?

APIs are protected using OAuth 2.0 or OpenID Connect, and identities are passed as JWT. Rate limiting is applied, usage monitored, and keys rotated. Parameters are validated, secure serialization is used, and APIs are versioned with deprecation plans and migration support. Documentation and tests are maintained regularly.
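
Passing identities as JWTs means each API call carries a signed claim set the server can verify. The sketch below hand-rolls HS256 verification with the standard library to make the mechanism visible; a real deployment would use a vetted library such as PyJWT, and the secret and claims here are illustrative.

```python
# Educational HS256 JWT sign/verify sketch (use a vetted JWT library
# in production). Tokens are header.payload.signature, base64url-encoded.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")  # tampered or wrong key
    return json.loads(b64url_decode(payload_b64))

token = sign_jwt({"sub": "guest-42"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))  # prints "{'sub': 'guest-42'}"
```

Real JWT handling adds expiry (`exp`) checks and, with OAuth 2.0 / OpenID Connect, asymmetric signatures verified against the provider's published keys.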

How do you manage open source components and their licenses?

We maintain automatic registries of open source components with versions and licenses. New components undergo multi-stage quality and security verification and license policy compliance. Project channels are monitored, and for critical dependencies, forks and own security patches are maintained when needed.

How do you secure mail and protect domains against spoofing?

Mail passes through multilayer filtering with reputation and content analysis plus attachment sandboxing. SPF, DKIM, and DMARC policies are maintained for our domains, with ongoing reporting and gradual policy tightening from monitoring to rejection. This significantly reduces spoofing and phishing in the organization.
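
For illustration, SPF, DKIM, and DMARC are published as DNS TXT records shaped roughly like the following; the domain, selector, mailer, and key are placeholders, not real records.

```
example.com.                       TXT  "v=spf1 include:_spf.example-mailer.com -all"
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.                TXT  "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
```

The gradual tightening mentioned above corresponds to moving the DMARC policy from `p=none` (monitor only) through `p=quarantine` to `p=reject` once the reports show legitimate mail passes.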

Do you regularly test backups and backup storage?

We create full backups according to schedule and store them securely in remote locations. Recovery is regularly tested and procedures documented, with additional backups prepared before updates. These practices reduce restore time and limit data loss on failures.

How do you segment the network and secure network devices in the production environment?

The network is segmented using VLANs and firewalls between segments. Network devices are configured following the principle of least privilege, firmware is regularly updated, unnecessary services are disabled, strong password and key policies enforced, and logging and monitoring activated on all devices to detect anomalies faster.

