Rancher Troubleshooting

2024/06/19 07:03:16 [ERROR] error syncing 'p-9858r/creator-project-owner': handler mgmt-auth-prtb-controller: projects.management.cattle.io "c-96kcw/p-9858r" not found, requeuing
2024/06/19 07:03:16 [ERROR] error syncing 'p-fsl78/creator-project-owner': handler mgmt-auth-prtb-controller: clusters.management.cattle.io "c-96kcw" not found, requeuing

As a first step, check this documentation:

https://www.suse.com/support/kb/doc/?id=000020788
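
If the KB steps are not enough, a quick check is to query the leftover objects using the same names that appear in the errors above (a sketch; adjust the IDs to your own cluster):

$kubectl get clusters.management.cattle.io c-96kcw
$kubectl get projects.management.cattle.io -n c-96kcw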

Some objects could not be deleted because they were blocked by the admission webhook:

$kubectl -n c-96kcw patch project.management.cattle.io p-fsl78 -p '{"metadata":{"finalizers":[]}}' --type=merge
project.management.cattle.io/p-fsl78 patched (no change)

$kubectl delete -f delete_obj.json
Error from server (BadRequest): error when deleting "delete_obj.json": admission webhook "rancher.cattle.io.projects.management.cattle.io" denied the request: System Project cannot be deleted


According to the webhook code:

func (a *admitter) admitDelete(project *v3.Project) (*admissionv1.AdmissionResponse, error) {
	if project.Labels[systemProjectLabel] == "true" {
		return admission.ResponseBadRequest("System Project cannot be deleted"), nil
	}
	return admission.ResponseAllowed(), nil
}
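
So the delete is refused only while the system-project label is set to "true". You can check what the object currently carries, for example:

$kubectl -n c-96kcw get project.management.cattle.io p-fsl78 --show-labels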


Edit the object and remove the label:

kubectl edit -n c-96kcw project.management.cattle.io/p-fsl78

Remove this line:

"authz.management.cattle.io/system-project": "true"

Example

{
    "apiVersion": "management.cattle.io/v3",
    "kind": "Project",
    "metadata": {
        "annotations": {
            "authz.management.cattle.io/creator-role-bindings": "{\"created\":[\"project-owner\"],\"required\":[\"project-owner\"]}",
            "field.cattle.io/creatorId": "user-ffk59",
            "field.cattle.io/systemImageVersion": "{\"alerting\":\"system-library-rancher-monitoring-initializing\",\"logging\":\"system-library-rancher-logging-initializing\",\"pipeline\":\"08e535a\"}",
            "lifecycle.cattle.io/create.mgmt-project-rbac-remove": "true",
            "lifecycle.cattle.io/create.project-namespace-auth_c-96kcw": "true"
        },
        "creationTimestamp": "2023-01-05T10:24:39Z",
        "generateName": "p-",
        "generation": 7,
        "labels": {
            "authz.management.cattle.io/system-project": "true",   <----------- remove this line
            "cattle.io/creator": "norman"
        },
        "name": "p-fsl78",
        "namespace": "c-96kcw",
        "resourceVersion": "267529710",
        "uid": "6ffa76d2-b16d-4ae4-822e-78174802bef6"
    },
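
Alternatively, instead of editing the object interactively, the same label can be removed in one command and the delete retried (a sketch using the same object as above; only do this for orphaned objects like this one):

$kubectl -n c-96kcw label project.management.cattle.io p-fsl78 authz.management.cattle.io/system-project-
$kubectl delete -f delete_obj.json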

The skills of a DevOps engineer

The DevOps engineer has a transversal role that requires a good command of the stages of IT development, as well as a good understanding of the challenges of continuous deployment and production. The job calls for a variety of skills, starting with the technical ones. The DevOps consultant must therefore:

  • know how to develop scripts and handle integration
  • use build and virtualisation tools: Docker, Kubernetes, etc.
  • know how to set up continuous integration chains (CI/CD)
  • know the operating system environments: Linux and Windows systems
  • master automated testing and deployment monitoring tools
  • be meticulous about data security and have excellent knowledge of server systems
  • work on cloud platforms (AWS, Azure, GCP and others) as well as on on-premises platforms

In addition to technical skills, the DevOps engineer must be able to evaluate the functioning of applications, make technical adjustments and measure the performance of the solutions developed.

While technical mastery is crucial, the human qualities of the DevOps consultant or engineer are a major asset in their relations with other teams and with the hierarchy. In addition to management skills, they must be able to listen to the demands of the client and of the teams. It is therefore essential that they have good interpersonal skills, to better understand the needs and exchange information more easily:

  • they must be able to manage and direct the teams with which they collaborate
  • they must always keep a certain distance from the project in order to carry it out successfully and respect the objectives set
  • they must be able to formulate requests in technical language
  • they must be able to bring all the participants together in order to develop a personalised and coherent solution

Not every DevOps engineer masters all programming languages, especially novices. A good engineer must therefore have the ability to quickly learn deployment tools or technologies so that the company can succeed in its digital transformation.

Moreover, a company looking to recruit a DevOps engineer or to call on a DevOps consultant will focus in particular on DevOps practices. In other words, it will pay particular attention to the work processes of the person in question, who will need to be familiar with various cloud computing providers. Finally, a good DevOps engineer must regularly monitor technological developments to remain at the forefront of their field, and be on the lookout for new languages and new digital tools.

PHP Framework

Which framework to choose is the question many people ask once they have picked PHP as their language.

There are several frameworks; I will present the ones I use, but others are just as valid depending on your needs. Symfony and Laravel are very complete, while others are simpler to set up and better suited to small teams.

Comparison of the two frameworks.

Twig vs Blade template engines

  • Laravel proposes writing .blade.php files, which are PHP files with a few extra directives to simplify your life, such as @foreach or @extends. The code inside the Blade tags is first and foremost PHP code
  • Symfony uses the Twig template engine, which offers a syntax completely different from PHP: you write {{ item.name }} rather than $item->name, for example. Twig can be extended through extensions, but you cannot call arbitrary PHP classes from the template

Twig is a more powerful engine but is also restrictive in requiring the use of extensions to add functionality. This approach also makes templates more easily editable by front-end developers (the syntax is simpler and looks like JavaScript).

Blade is easier to pick up because it lets you write plain PHP, with the occasional risk of seeing calls to models directly in the views.

ORM Doctrine vs Eloquent

  • Laravel uses Eloquent by default, an ORM based on the Active Record pattern, where the model is responsible both for representing an entity and for managing the persistence of its data
  • Symfony uses Doctrine by default, an ORM based on the Data Mapper principle, which separates the notion of entity (the object representing the data), repository (the object used to manipulate entities) and manager (the object responsible for persistence)

Eloquent has a more natural and logical syntax but this apparent simplicity can quickly lead to “fat models” because all the logic will be stored in the same place.

Doctrine naturally allows a better separation but will be relatively verbose for simple cases.

FormBuilder vs FormRequest form management

  • Symfony allows you to create a class that manages a form from its creation to the processing of the data. The form can hydrate an entity from the received data
  • Laravel simply offers a particular type of Request that lets you validate and process the data received during a request. You then have to process the data manually and modify the model accordingly

Adding a Bundle module vs ServiceProvider

  • Symfony is known for its bundle system, which allows extra functionality to be added simply and with good code separation
  • Laravel does not have such a system, but a similar result can be achieved with ServiceProviders, which have a boot() method. It is thus possible to create a library in a separate namespace and hook logic in when the ServiceProvider is loaded

Module integration via Composer works in both frameworks.

In the end

Laravel focuses on the simplicity of the code for the developer, which can sometimes lead to bad practices when you want to take shortcuts. But with a little rigour you will end up with clean code and a good code organisation. The use of a service container makes it possible to manage dependency injection and to keep the code easily testable.

Symfony requires more rigour and is more complex to learn. It has a slightly longer learning curve but has the advantage of imposing more restrictions. Once past the learning phase and the discovery of the different bundles provided by the framework, you are just as productive as with Laravel.

The choice depends on your affinity for the method used.

Continuous Integration and Delivery

CI

CI is the part of the workflow where every developer commit is validated automatically.

The automated unit and functional tests are run by the integration server, e.g. GitLab CI or CircleCI. Once the tests pass, a build/artefact is produced and deployed to the TEST environment.

To arrive at a build/artefact:

  • The code is built at each commit
  • The code is automatically unit tested at each commit
  • Everyone has access to the build and test report
  • Tests are run on a scaled down version of the production environment
  • Deliverables are stored in a version-controlled artefact repository
  • Deliverables are automatically deployed to a test environment after a successful build


If any of these steps fails, the developer responsible for the commit receives a notification so that they can fix it as soon as possible.
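
As a rough illustration only, the steps above could be scripted along these lines (the image name, registry and helper scripts are hypothetical placeholders; ${CI_COMMIT_SHA} is a variable provided by GitLab CI, for example):

#!/bin/sh
set -e                                                 # stop the pipeline at the first failing step

IMAGE="registry.example.com/myapp:${CI_COMMIT_SHA}"    # one versioned artefact per commit

docker build -t "$IMAGE" .                             # build the code at each commit
docker run --rm "$IMAGE" ./run-tests.sh                # run the automated tests against the freshly built image
docker push "$IMAGE"                                   # store the deliverable in the artefact registry
./deploy.sh test "$IMAGE"                              # deploy automatically to the TEST environment after a successful build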

Implementing CI means integrating all of the processes described below.

The different types of tests

  • Unit tests are used to test methods or functions, in the same way as during the development of a product
  • Integration tests ensure that several components behave correctly together; they catch functional regressions
  • Acceptance tests are similar to integration tests but focus on the business requirements
  • User interface tests ensure that, from the user's point of view, the interface actions work

The more the tests rely on the UI layer, the more time they take to implement and maintain, and therefore the more expensive they are.

To adopt continuous integration, you will need to run your tests on every branch you push.

To do this, ask yourself some simple questions:

  • where is the code hosted? Restrictions…
  • what operating system and resources does the application need? Dependencies…
  • how many resources do we need for our tests?
  • is the application monolithic or microservice-based?
  • do you use containers? Docker…

Test coverage & complexity

It’s good to aim for over 80% coverage but be careful not to confuse a high coverage percentage with a good test suite. A code coverage tool helps us find untested code. The quality of your tests will make the difference at the end of the day.
A tool like SonarQube is there to help make decisions when the code is complex and untested.
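
As an illustration, a PHP project could produce a coverage report and feed it to SonarQube roughly like this (the project key and paths are placeholders; exact property names depend on the language plugin you use):

phpunit --coverage-clover build/coverage.xml            # run the test suite and produce a coverage report
sonar-scanner \
  -Dsonar.projectKey=myproject \
  -Dsonar.sources=src \
  -Dsonar.php.coverage.reportPaths=build/coverage.xml   # let SonarQube correlate coverage with complexity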

Duplication and dead code

Duplicated code is tomorrow's dead code, or tomorrow's bug that has to be fixed twice! It is very important to check your code and reduce duplication: a maximum of 5% duplicated code is acceptable on a large or legacy project, but try to stay under 2% for any project that starts out with quality code metrics.

Refactoring

If you are about to make significant changes to your application that do not have sufficient test coverage, you should start by writing acceptance tests around the features that might be impacted. This will provide a safety net to ensure that the original behaviour has not been affected after refactoring or adding new features.

The environment

The entire IT Dev/DevOps/Admin team should aim to keep the same environment everywhere. The revision number of every component used by the application should be the same in Dev/Build/Test/Integration/Prod. This is where containers (Docker) and orchestrators (Kubernetes) are useful.

Mindset

If a developer breaks the CI workflow, fixing it becomes the top priority.

To write good tests, you’ll need to ensure that developers are involved and have access to a code analysis tool.

Whether you have an existing code base or are just starting out, it is certain that bugs will occur as part of your releases. Be sure to add tests when you resolve them to prevent them from recurring.

CD

The deployment of the application is managed by the code. The code describes exactly what the application needs to start and run. The artefact and environment will be the same between the Test/Integration/Production systems because the image is generated only once by the CI.
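
For example, promoting the exact same image through the environments could look like this (the contexts, deployment and image names are hypothetical):

kubectl --context test set image deployment/myapp myapp=registry.example.com/myapp:1.4.2
kubectl --context prod set image deployment/myapp myapp=registry.example.com/myapp:1.4.2   # same artefact, only the target changes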

Continuous Delivery

Once builds and unit/integration tests have been automated in continuous integration, continuous delivery automates the publication of the validated code to a registry/repository. To be effective, continuous delivery therefore requires continuous integration to already be in place in the development pipeline. Continuous delivery ensures that the code base is always ready to be deployed to a production environment.

In continuous delivery, each step (from merging code changes to distributing production-ready versions) involves automating the code testing and release processes. At the end of this process, the operations team is able to easily and quickly deploy an application to a production environment.

Continuous Deployment

The final stage of a mature CI/CD pipeline is continuous deployment. In addition to the continuous delivery process, which automates the release of a production-ready version to a code repository, continuous deployment automates the launch of the application into a production environment. Since there is no manual gate between the previous stage of the pipeline and production, continuous deployment depends above all on well-designed test automation.

In practice, under continuous deployment, a change made by a developer to an application could be released within minutes of the code in question being written (assuming it passes the automated tests). This makes it much easier to receive and integrate user feedback on a continuous basis. Together, these three CI/CD practices reduce the risks associated with application deployment, since it is easier to release changes in small increments than in one block. However, this approach requires considerable upfront investment, as automated tests will need to be written to accommodate a wide range of testing and release stages in the CI/CD pipeline.

IAM

Identity and Access Management (IAM) is a business process infrastructure for managing electronic or digital identities.

This infrastructure includes the organisational rules for managing digital identities, as well as the technologies needed to support this management.

With IAM technologies, IT managers can control user access to their organisations’ critical information. IAM products provide role-based access control, which allows administrators to regulate access to systems and networks based on each user’s role in the organisation.

In this context, access refers to a user’s ability to perform a specific task, such as viewing, creating or modifying a file. Roles are defined according to competence, authority and responsibility within the organisation.

Systems used for identity and access management include single sign-on (SSO), multi-factor authentication and access management. These technologies also provide the ability to securely store identity and profile data, as well as data governance features to ensure that only necessary and relevant data is shared.

These products can be deployed on-premises, delivered by an external provider via a cloud subscription model or deployed in a hybrid cloud.

Identity and access management functionality requirements

Identity and access management systems must include all the controls and tools needed to capture and store user login information, administer the enterprise database of user identities, and manage the assignment and removal of access privileges. The aim is to provide a centralised directory service that can both monitor and track all aspects of the enterprise user base.

In addition, IAM technologies should simplify the process of provisioning and configuring user accounts, including reducing the time it takes to complete these processes through a controlled workflow that minimises the risk of error and misuse, and allowing automated processing of accounts. Administrators must also be able to view and change access rights instantly.

Identity and access management systems must also strike the right balance between speed and automation of their processes and the control given to administrators to manage and modify access rights. Therefore, to manage access requests, the centralised directory requires an access rights system that automatically associates job titles, entity identifiers and employee locations with the relevant privilege levels.

Multiple levels of analysis can be included in the form of workflows to validate each request. In this way, the configuration of appropriate control processes and the review of existing rights are simplified, so as to avoid privilege proliferation, i.e. the gradual accumulation of access rights that exceed the needs of users in the course of their work.

Finally, IAM systems should provide flexibility in the creation of groups with specific privileges for specific roles, in order to consistently assign access rights in relation to employees’ functions. It is also about providing request and approval processes for changing privileges, as employees with the same responsibilities and working in the same location may need slightly different and therefore customised access.

Benefits of identity and access management

IAM technologies can be used to set up, capture, record and manage user identities and associated permissions in an automated manner, ensuring that access privileges are granted according to a single rule interpretation and that all users and services are properly authenticated, authorised and verified.

By properly managing identities, organisations can better control user access and reduce the risk of internal and external data breaches.

Automated IAM systems increase efficiency by reducing the amount of effort, time and money spent on managing access to their networks, either manually or through individual access controls that are not linked to centralized management systems.

Using a common platform for identity and access management allows the same security policies to be applied across the various devices and operating systems used by the organisation. From a security perspective, the use of an IAM infrastructure can facilitate the enforcement of user authentication, validation and authorisation policies, as well as address privilege proliferation issues.

Implementing identity and access management tools in accordance with associated best practices can provide a competitive advantage.

For example, IAM technologies enable the organisation to grant external users (customers, partners, contractors and suppliers) access to its network through mobile, on-premise and on-demand applications without compromising security. This increases collaboration, productivity and efficiency and reduces operating costs.

On the other hand, poorly controlled identity and access management processes can lead to regulatory non-compliance, because in the event of an audit, business leaders will have difficulty proving that company data is not at risk of misuse.

IAM systems help companies to better comply with legislation by enabling them to show that their data is not being misused. In addition, with these tools, companies can demonstrate that they are able to make data available for audit on demand.

Business benefits of IAM

It can be difficult to get a budget for IAM projects, as they do not directly translate into profitability or operational gains. However, ineffective identity and access management carries significant risks, both in terms of compliance and the overall security of the organisation. Indeed, such poor management increases the likelihood of significant damage should external and internal threats materialise.

Administration processes have always been necessary to ensure the smooth flow of business data while managing access. However, as the business IT environment has evolved, so have the challenges, including destabilising new trends such as the use of personal devices (BYOD), cloud computing, mobile applications and increasing employee mobility. There are more devices and services to manage than ever before, with varying requirements for access privileges.

K8S & OUTSOURCING

Kubernetes is an open-source container orchestrator created by Google in 2014. It has become an indispensable tool for cloud-ready applications, designed to be hosted in the cloud or in hybrid environments!

Its advantages are the following:

  • Cloud-native design: Kubernetes encourages the implementation of microservices and distributed architectures, which increases the agility, resilience to failure and scalability of the application
  • Portability: Kubernetes works the same way, with the same images and configurations, regardless of the cloud provider (AWS, Azure, GCP, etc.) or VMware
  • Deployment automation: Kubernetes allows you to model an application in containers and automate its deployment
  • Open source: Kubernetes is an open-source project with a large community

OUTSOURCING AND KUBERNETES MANAGED SERVICES

With agility, reliability and security in mind, I offer a managed Kubernetes platform service, whatever the hosting provider, even on your own premises.

I also offer managed services that give users total flexibility over their Kubernetes infrastructure, in a completely transparent way.

You own your platform! All work is clearly and simply documented for easy transfer.

I provide a Rancher 2 interface from which all the clusters can be managed.

Services offered:

  • Creation of the Kubernetes cluster on your premises or completely outsourced
  • Maintenance in operational conditions
  • Preventive security maintenance
  • Backups of your applications
  • Supervision of the Kubernetes cluster and applications
  • Follow-up and personalised support for the migration of applications with Docker and Kubernetes technologies

Contact me to accompany you: