Monday, December 25, 2023

Docker site goes down due to an SSL renewal error

sudo certbot -i nginx -d mqbakery.com -d '*.mqbakery.com' -a manual

sudo nginx -t

sudo systemctl reload nginx


 or

 

docker stop docker-webserver-1
docker start docker-webserver-1

NB: When you add the ACME challenge TXT record in GoDaddy, don't forget to remove the domain name from the TXT record's Name field. E.g. _acme-challenge.mqbakery.com should be entered as just _acme-challenge, because GoDaddy appends the domain automatically.
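The GoDaddy note above can be illustrated with a small shell sketch: the TXT record name is the full ACME challenge record with the domain stripped (values taken from this note; nothing here calls GoDaddy itself).

```shell
# GoDaddy appends the domain to the TXT "Name" field automatically,
# so only the prefix before the domain should be entered.
full_record="_acme-challenge.mqbakery.com"
domain="mqbakery.com"
txt_name="${full_record%.$domain}"   # strip the ".mqbakery.com" suffix
echo "$txt_name"                     # prints: _acme-challenge
```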


Saturday, December 23, 2023

Steps to Install Python

 sudo apt update
sudo apt install python3

Verify Installation:

python3 --version

python3
 

sudo apt install python3-pip

pip3 --version

Run a script with the interpreter:

python3 filename.py
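As a quick end-to-end check of the steps above (hello.py is an illustrative filename):

```shell
# Write a one-line script and run it with the interpreter installed above.
cat > hello.py <<'EOF'
print("hello from python3")
EOF
python3 hello.py   # prints: hello from python3
```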



 

Google Analytics

https://support.google.com/analytics/answer/9304153?hl=en


https://analytics.google.com/analytics/web/#/p213025502/reports/intelligenthome


https://www.youtube.com/watch?v=QmEOPuJr05w&list=PLI5YfMzCfRtZ4bHJJDl_IJejxMwZFiBwz&index=4




Setting up a Google Analytics account involves a few steps. Follow these step-by-step instructions:

  1. Create a Google Account: If you don't have a Google Account, you'll need to create one. Visit the Google Account creation page and follow the instructions.

  2. Sign in to Google Analytics: Once you have a Google Account, go to https://analytics.google.com/analytics/web/ and sign in with your Google Account credentials.

  3. Set Up a New Google Analytics Account:

    • Click on the "Start for free" button.
    • Fill in your Account Name, which is typically the name of your business or website.
    • Choose what you want to measure (select "Web" for a website).
  4. Set Up a Property:

    • Enter the Website Name and Website URL.
    • Choose the industry category that best describes your business.
    • Select the reporting time zone.
  5. Configure Data Sharing Settings:

    • Decide whether you want to share data with Google and its partners. Adjust the settings according to your preference.
  6. Create a Google Analytics 4 Property:

    • You may be prompted to create a Universal Analytics property or a Google Analytics 4 property. Create a Google Analytics 4 property; Universal Analytics standard properties stopped processing new data in July 2023.
  7. Agree to Terms and Create:

    • Read and accept the terms of service.
    • Click on the "Create" button to create your Google Analytics account and property.
  8. Install the Tracking Code:

    • After creating your property, you will be given a tracking code (a unique snippet of code). Copy this code.
    • Paste the tracking code into the HTML of every page on your website, just before the closing </head> tag.
  9. Verify Tracking Installation:

    • After installing the tracking code, go back to your Google Analytics account.
    • Click on "Home" in the left-hand menu.
    • Look for the Realtime section, and if your tracking is set up correctly, you should see data in the Realtime report.

  10. Explore Google Analytics:

    • Once your account is set up and the tracking code is installed, explore the various reports and features available in Google Analytics. You can find information on user behavior, website traffic, and more.

Remember that it may take some time for data to appear in your reports, so be patient. Regularly check your Google Analytics account to gain insights into your website's performance and user behavior.

Below is the Google tag for this account. Copy and paste it in the code of every page of your website, immediately after the <head> element. Don’t add more than one Google tag to each page.

Wednesday, December 13, 2023

free courses available for Artificial Intelligence (AI)

 There are several excellent free courses available for Artificial Intelligence (AI) studies. Here are some of the best free courses for AI:

1. Stanford University's CS229: Machine Learning: This course is available for free online and covers various machine learning algorithms and techniques. It is widely regarded as one of the best resources for learning machine learning.

2. Stanford University's CS231n: Convolutional Neural Networks for Visual Recognition: This course focuses on deep learning techniques applied to computer vision. It covers topics such as CNNs, image classification, object detection, and more.

3. University of Helsinki's Elements of AI: This online course provides a beginner-friendly introduction to AI. It covers various AI concepts and applications, offering a comprehensive overview of the field.

4. Google's Machine Learning Crash Course: This free online course offers an introduction to machine learning concepts. It covers topics such as linear regression, classification, neural networks, and more.

5. Microsoft's Artificial Intelligence (AI) Ethics, Law, and Policy: This course explores the ethical implications of AI. It covers topics such as fairness, transparency, privacy, and bias in AI systems.

6. deeplearning.ai's Deep Learning Specialization: This specialization on Coursera offers the first course, "Neural Networks and Deep Learning," for free. It provides a comprehensive introduction to deep learning techniques.

7. Berkeley's Artificial Intelligence: This course, available on edX, covers various AI topics such as search algorithms, logic, planning, and more. It provides a solid foundation in AI principles.

8. Fast.ai's Practical Deep Learning for Coders: This course is designed to make deep learning accessible for coders. It covers practical aspects of deep learning and provides hands-on experience with real-world applications.

9. UC San Diego's Machine Learning Fundamentals: This course offers an overview of machine learning principles, algorithms, and applications. It covers topics such as regression, clustering, and reinforcement learning.

10. IBM's Applied AI: This course on Coursera explores various AI applications in industries like healthcare, finance, and cybersecurity. It offers insights into how AI is being used in real-world scenarios.

These free courses provide a solid foundation in AI and cover a range of topics including machine learning, deep learning, ethics, and applications. They are a great way to get started and gain knowledge in the field of AI without incurring any cost.

AI tools

To help you with coding

 

 https://www.blackbox.ai/

 

 

Friday, November 24, 2023

sudo apt update throwing "connection timed out" error

If you face this "connection timed out" error:

$ sudo apt update 
  Could not connect to in.archive.ubuntu.com:80 (2403:8940:ffff::f), connection timed out Could not connect to in.archive.ubuntu.com:80 (103.97.84.254), connection timed out
Err:8 http://in.archive.ubuntu.com/ubuntu bionic-updates InRelease
  Unable to connect to in.archive.ubuntu.com:http:
Err:9 http://in.archive.ubuntu.com/ubuntu bionic-backports InRelease
  Unable to connect to in.archive.ubuntu.com:http:
------------------------------------------------------------------------
 
1) sudo nano /etc/apt/sources.list
2) Replace all http://xx.archive.ubuntu.com/ubuntu... entries (where xx is a country mirror prefix such as "in") with http://archive.ubuntu.com/ubuntu...
 
https://askubuntu.com/questions/1198621/apt-get-cannot-connect-to-ubuntu-archives  
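Step 2 can be done with one sed command. A sketch against a scratch file (on a real system you would target /etc/apt/sources.list with sudo; the bionic line below is illustrative):

```shell
# Sample sources line using a country mirror ("in.")
printf 'deb http://in.archive.ubuntu.com/ubuntu bionic main\n' > sources.list.test

# Rewrite any country mirror host to the main archive host
sed -i 's|http://[a-z]*\.archive\.ubuntu\.com|http://archive.ubuntu.com|g' sources.list.test
cat sources.list.test   # deb http://archive.ubuntu.com/ubuntu bionic main
```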
 

Wednesday, November 22, 2023

Create a bootable USB stick on Ubuntu

 

Download the Ubuntu to your system

  1. Insert your USB stick (select ‘Do nothing’ if prompted by Ubuntu)
  2. On Ubuntu 18.04 and later, use the bottom left icon to open ‘Show Applications’
  3. In older versions of Ubuntu, use the top left icon to open the dash
  4. Use the search field to look for Startup Disk Creator
  5. Select Startup Disk Creator from the results to launch the application

REFERENCES:

https://ubuntu.com/tutorials/create-a-usb-stick-on-ubuntu#3-launch-startup-disk-creator

Saturday, November 11, 2023

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

 

See the current value of max-old-space-size (in MB)

 

node -e 'console.log(v8.getHeapStatistics().heap_size_limit/(1024*1024))'

 

 

That said, to increase the memory, in the terminal where you run your Node.js process:

 export NODE_OPTIONS="--max-old-space-size=8192"
 
Reference 
https://stackoverflow.com/questions/53230823/fatal-error-ineffective-mark-compacts-near-heap-limit-allocation-failed-javas 

Thursday, October 26, 2023

Test using REQBIN

 

 

1) Go to the website https://reqbin.com/curl


2) Paste the cURL command and run it to see validation errors

Wednesday, October 11, 2023

RESIZE AWS DISK SPACE

 

To increase the storage space of an Amazon EC2 instance, you can follow these general steps. Keep in mind that this process may vary slightly depending on the EC2 instance type and your specific requirements:

  1. Create a Backup (Snapshot): Before making any changes, it's crucial to create a backup (snapshot) of your EC2 instance's EBS (Elastic Block Store) volume. This ensures that you have a point-in-time copy of your instance's data that you can restore if anything goes wrong during the resizing process.

    To create a snapshot, go to the AWS Management Console, navigate to the "EC2" service, select "Volumes" from the left sidebar, find your instance's EBS volume, right-click it, and choose "Create Snapshot."

  2. Stop the EC2 Instance: To make changes to the instance's storage, you will typically need to stop it first. In the AWS Management Console, right-click your instance, and choose "Instance State" > "Stop."

  3. Resize the EBS Volume: After stopping the instance, you can resize the EBS volume attached to it. Here's how:

    a. In the AWS Management Console, go to "EC2" and navigate to the "Volumes" section.

    b. Find the EBS volume that is attached to your EC2 instance.

    c. Right-click the volume and choose "Modify Volume."

    d. In the "Modify Volume" dialog, increase the volume size to your desired capacity.

    e. Confirm the changes, and AWS will perform the resize operation. This process usually completes quickly, but it might take a few minutes.

  4. Start the EC2 Instance: Once the volume is resized, you can start your EC2 instance again. Right-click your instance in the AWS Management Console and choose "Instance State" > "Start."

  5. Resize the File System: After resizing the EBS volume, you'll need to resize the file system on the instance to make use of the additional storage space. This step depends on the operating system you're using:

    • For Linux: If you're using Linux, you can use the resize2fs command to resize the file system. For example, if you're using ext4, you can run:

      lsblk             # to find the partition name, e.g. xvda1
      df -hT            # to find the filesystem type
      sudo growpart /dev/xvda 1
      sudo resize2fs /dev/xvda1
      
       
    • sudo resize2fs /dev/xvdf

      Replace /dev/xvdf with the appropriate device name for your instance.

    • For Windows: If you're using Windows, you can use the "Disk Management" tool to extend the partition to include the newly allocated space.

  6. Verify the Storage Increase: After performing these steps, verify that the storage space has increased. You can check the available disk space using commands like df (Linux) or in the Windows File Explorer.

Remember that these steps might vary slightly based on your specific EC2 instance type, region, and operating system. Always take precautions and create backups before making changes to your EC2 instance's storage to prevent data loss.

Wednesday, September 27, 2023

Set Up Firebase Integration

 

    • Go to the Firebase Console (https://console.firebase.google.com/) and create a new project or use an existing one.
    • Once the project is created, navigate to the "Project settings" and click on the "Cloud Messaging" tab to find your server key and sender ID. You'll need these for server-side integration.

     

    If server key is not available:

         

    1. Go to this link and enable: "Cloud Messaging" service:

    https://console.cloud.google.com/apis/api/googlecloudmessaging.googleapis.com

     

       2. In the project settings, add a web app (if you don't already have one) and download the Firebase configuration JSON file. You'll need this for server authentication.

     3. To get the VAPID key, generate a Web Push certificate key pair in the "Cloud Messaging" tab.


    References: https://stackoverflow.com/questions/37427709/firebase-messaging-where-to-get-server-key

Saturday, August 26, 2023

DOCKER

REFERENCES:

https://www.freecodecamp.org/news/where-are-docker-images-stored-docker-container-paths-explained/

 

1)  The output contains information about your storage driver and your docker root directory

$ docker info 
 
2) A Docker container consists of network settings, volumes, and images.
 
 Ubuntu: /var/lib/docker/
 
3) Docker images
  Docker images are stored in /var/lib/docker/overlay2
 
4) Docker Volumes
 
$ docker run --name nginx_container -v /var/log nginx 
 
5)Clean up space used by Docker
$ docker system prune -a 
 
6) To get container size and container info
$ docker ps -s
$ docker ps -a 
 
7) When you pull down an image from a repository (docker pull), or create a container from an image that does not yet exist locally, each layer is pulled down separately and stored in Docker's local storage area, which is usually /var/lib/docker/
 
8) Check out the sizes of the images:
docker image ls
9) create a volume
docker volume create my-vol 
10) List Volumes
docker volume ls

11) Inspect volume
docker volume inspect my-vol
 12) To back up a volume (replace <container_name> and <path> with your values):
 docker run --volumes-from <container_name> -v $(pwd):<path> ubuntu tar cvf <path>/storage.tar /app/storage/app/public
13) To restore a volume:
docker run --rm --volumes-from <container_name> -v $(pwd):<path> ubuntu bash -c "cd storage/app/public && tar xvf <path>/storage.tar"
14) To get a shell inside a container:
docker exec -it docker-webserver-1 /bin/bash
 
To set up an SSL certificate:

docker compose run --rm certbot certonly --manual --preferred-challenges dns -d example.com -d '*.example.com' -v
 
docker ps
docker restart docker-name
docker system df
 df -h
docker builder prune
docker exec -it docker-webserver-1 /bin/sh
docker logs docker-webserver-1
Dangling images are images that are not associated with any running containers and are not tagged with a specific version or name. These images are typically left behind when you build or pull new images, and older, unused versions are no longer needed. They can accumulate over time and consume disk space. 
docker image prune
docker container prune 
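The backup/restore pattern from steps 12 and 13 is just tar across a bind mount. A local sketch without Docker (the directories stand in for the container volume path and the host backup path):

```shell
# volsrc stands in for the volume contents; volrestore for the restore target
mkdir -p volsrc volrestore
echo "data" > volsrc/file.txt

# Back up: archive the volume directory into a tarball
tar cf storage.tar -C volsrc .

# Restore: unpack the tarball into the target directory
tar xf storage.tar -C volrestore
cat volrestore/file.txt   # prints: data
```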

Thursday, June 15, 2023

Report Generation for SonarCloud using Bitegarden Report

 

 

 

java -Dsonar.token=SONAR_TOKEN -Dlicense.file=/home/mysupply/Downloads/bitegarden-sonarcloud-report-1.3.4/LICENSE.txt -Dsonar.projectKey=examplecode1_phpbackend -Dsonar.organizationKey=examplecode1  -Dreport.type=0 -jar bitegarden-sonarcloud-report-1.3.4.jar

Wednesday, June 14, 2023

ERROR for site owner: Invalid domain for site key google recaptcha

 

If you encounter the error message "Invalid domain for site key" when working with Google reCAPTCHA, it typically means that the site key you are using is not associated with the domain or website where you are trying to implement reCAPTCHA.

To resolve this issue, you need to ensure that you have registered the correct domain or website with Google reCAPTCHA and obtained the appropriate site key. Follow these steps to fix the problem:

  1. Go to the Google reCAPTCHA admin console: https://www.google.com/recaptcha/admin.
  2. Sign in with the Google account that is associated with the reCAPTCHA implementation.
  3. If you haven't already done so, register your domain or website by clicking on the "+ Add" button under the "Register a new site" section.
  4. Provide the necessary information, including the label for your site and the domain name(s) where reCAPTCHA will be used.
  5. Once you have registered your site, you will receive a site key and a secret key.
  6. Ensure that you are using the correct site key for the specific domain or website where you are implementing reCAPTCHA. The site key should match the domain you specified during registration.
  7. Update your website's code with the correct site key. Double-check that you have entered the site key accurately and without any additional spaces or characters.
  8. Save the changes and deploy the updated code to your website.

By following these steps and using the correct site key associated with the domain or website, you should be able to resolve the "Invalid domain for site key" error.

 

 

Add the site key to 

(search for recaptcha)

clients: supplier portal , admin portal , user portal (to their env.docker files)

Add secret key to

server: laravel back-end

 

Tuesday, May 23, 2023

Steps to setup Google Recaptcha

 To implement Google reCAPTCHA on your website, follow these general steps:

    Sign up for reCAPTCHA: Go to the reCAPTCHA website (https://www.google.com/recaptcha) and sign in with your Google account. You will need to register your website and obtain API keys.

    Choose reCAPTCHA type: Decide whether you want to use reCAPTCHA v2 or reCAPTCHA v3. reCAPTCHA v2 provides the traditional "I'm not a robot" checkbox, while reCAPTCHA v3 is a background verification system that assigns a score to user interactions.

    Obtain API keys: Once you have registered your website, you will receive a site key and a secret key. These keys will be used to integrate reCAPTCHA into your website.

    Add reCAPTCHA script to your HTML: Insert the reCAPTCHA script tag into the <head> section of your HTML file. This script is provided by Google and loads the reCAPTCHA functionality on your web page.

    Add reCAPTCHA widget: Place the reCAPTCHA widget where you want it to appear on your website. For reCAPTCHA v2, this is typically a checkbox element. For reCAPTCHA v3, you will need to add a script to initiate the verification process.

    Verify the response: When a user submits a form or performs an action, you need to verify the reCAPTCHA response on your server-side code. Send the response token to Google's reCAPTCHA API for verification using your secret key.

    Handle the verification response: After submitting the reCAPTCHA response to Google, you will receive a verification response. Based on this response, you can determine if the user is a human or a bot and proceed accordingly.

The implementation details can vary depending on your programming language and framework. Google provides detailed documentation and code examples for various platforms, which you can refer to for specific implementation steps.

Remember to handle the verification on the server-side, as client-side verification can be bypassed by malicious users.
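The server-side verification step above boils down to a single POST to Google's siteverify endpoint. A sketch that only prints the command rather than calling it (SECRET and TOKEN are placeholders):

```shell
SECRET="your-secret-key"             # placeholder: your reCAPTCHA secret key
TOKEN="response-token-from-client"   # placeholder: the g-recaptcha-response value

# Print the verification call; run it from your server-side code, never the client
echo curl -s "https://www.google.com/recaptcha/api/siteverify" \
  -d "secret=$SECRET" -d "response=$TOKEN"
```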
 

Other references:

https://www.freecodecamp.org/news/how-to-setup-recaptcha-v3-in-laravel/

Facing issues in Microsoft Azure Portal, Office.com??

 1)To fix issues, create a new global admin user temporarily

Steps: i) Log in to office.com (Microsoft 365 account)

           ii) Select 'Admin' from the top left (App Launcher)

           iii) Go to the 'Users' section in the left sidebar

           iv) Select 'Active Users' from the sub menu

           v) Use the 'Add User' option to create a new user with the 'Global Admin' privilege

           vi) Assign a password and make sure to note it down

           vii) Enable MFA if required

           viii) Assign all unpaid licenses.

    After creating, do the required task, like verifying publisher or enabling MFA.

Once the task is done you may delete the test user

         

Tuesday, May 9, 2023

Become MICROSOFT verified publisher

References: 

https://learn.microsoft.com/en-us/azure/active-directory/develop/mark-app-as-publisher-verified
https://portal.azure.com/#view/Microsoft_AAD_RegisteredApps/ApplicationsListBlade

 

If you are already enrolled in the Microsoft Partner Network (MPN)(Microsoft partner center) and have met the pre-requisites, you can get started right away:

    1) Sign in to the Azure Portal and search for 'App registrations'

    2)Choose an app and click 'Branding & properties' from the left side menu.

    3)Click Add MPN ID to verify publisher and review the listed requirements.

    4)Enter your MPN ID and click Verify and save.

 

 

Other Portals you might require for MFA and other tasks:

 

1)Azure active Directory

2) Azure AD Identity  (Risky users issue)

 

 


Wednesday, April 12, 2023

How to host godaddy domain in AWS S3

 https://medium.com/tensult/aws-hosting-static-website-on-s3-using-a-custom-domain-cd2782758b2c

Thursday, April 6, 2023

how to host in aws s3 for free

 

AWS S3 offers a free tier that includes 5GB of storage, 20,000 GET requests, 2,000 PUT requests, and 15GB of data transfer out per month for the first 12 months. To host a website in S3 for free, you can follow these steps:

  1. Create an AWS account if you haven't already done so.
  2. Navigate to the S3 console and create a new bucket with a globally unique name. Note that the name you choose will be part of the website URL, so it should be something easy to remember and type.
  3. Select the newly created bucket and open the Properties tab. Under the "Static website hosting" section, choose "Use this bucket to host a website" and enter the name of the index document (e.g., index.html).
  4. Upload your website files to the bucket. You can do this by selecting the bucket, clicking on the "Upload" button, and following the prompts to select your files.
  5. Once your files are uploaded, select the bucket again and open the Permissions tab. Under the "Bucket policy" section, click on "Edit" and paste in the following policy:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::your-bucket-name/*"]
  }]
}
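Before pasting the policy into the console, it is worth validating the JSON locally (your-bucket-name is a placeholder, as above):

```shell
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::your-bucket-name/*"]
  }]
}
EOF
# json.tool exits non-zero on malformed JSON
python3 -m json.tool policy.json > /dev/null && echo "policy JSON is valid"
```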
 

 Error:

You either don’t have permissions to edit the bucket policy, or your bucket policy grants a level of public access that conflicts with your Block Public Access settings. To edit a bucket policy, you need s3:PutBucketPolicy permissions. To review which Block Public Access settings are turned on, view your account and bucket settings. Learn more about Identity and access management in Amazon S3

 

--will need to switch off the bucket's Block Public Access settings before the policy can be applied

  6. Create a new Route 53 hosted zone or use an existing one to create a DNS record that maps your domain name to the S3 bucket website endpoint.
  7. In Route 53, create a new record set with the following configuration:
    • Name: The name of your domain, such as example.com.
    • Type: Select "A - IPv4 address".
    • Alias: Select "Yes" and choose your S3 bucket website endpoint from the dropdown list.
  8. Save your changes in Route 53 and wait for the DNS changes to propagate, which can take up to 48 hours.

Once the DNS changes have propagated, you should be able to access your S3 bucket website at the specified domain name.

Note that some additional configuration may be required depending on your specific use case, such as configuring SSL/TLS encryption, setting up redirects, or restricting access to your S3 bucket.

-Click on the bucket to see the objects

-Click on the object to get the url, use the url to load the browser



When setting up a custom domain for an AWS S3 bucket, you generally do not need to enter an IP address. Instead, you can create a CNAME record that points to the S3 bucket's endpoint.

For example, to set up a custom domain for an S3 bucket located in the eu-central-1 region, you can follow these steps:

  1. In your AWS S3 console, select the S3 bucket you want to use for your website.

  2. Click on the "Properties" tab and then click on the "Static website hosting" option.

  3. In the "Static website hosting" page, note down the S3 bucket endpoint URL, which should be in the format of bucketname.s3-website-region.amazonaws.com.

  4. Go to your GoDaddy account, select the domain you want to use for your website, and click on "Manage DNS".

  5. Create a new CNAME record and enter your desired subdomain in the "Name" field (e.g. www), and then enter the S3 bucket endpoint URL in the "Points to" field.

For example, for an S3 bucket with endpoint mywebsite.s3-website.eu-central-1.amazonaws.com, you can create a CNAME record with "www" as the name and mywebsite.s3-website.eu-central-1.amazonaws.com as the value.

  6. Save the changes and wait for the DNS records to propagate, which can take some time.

Sunday, March 26, 2023

Purchase Amazon EC2 Reserved Instances

 

Purchase RIs using the AWS Management Console

  1. Log in to the AWS Management Console.
  2. In the Amazon Web Services menu choose “EC2”.
  3. In the left navigation pane, choose “Reserved Instances”.
  4. Choose “Purchase Reserved Instances”.
  5. Select your Reserved Instance type, platform, payment option, instance type, offering class, and term length. Optionally, check the "Only show offerings that reserve capacity" box to select an Availability Zone, if you want to reserve capacity.
  6. Adjust the quantity of instances to purchase and ensure you are comfortable with the price quoted.
  7. Confirm your purchase.

Important notes about purchases

  • If your needs change, you can modify or exchange reserved instances, or list eligible Standard Reserved Instances for sale on the Reserved Instance Marketplace. 
  • You can purchase up to 20 Reserved Instances per Availability Zone each month. If you need additional Reserved Instances, complete this form.
  • Purchases of Reserved Instances are non-refundable.
  • If you purchase a Reserved Instance from a third-party seller, we will share your city, state, and zip code with the seller for tax purposes. If you don't wish to purchase from a third-party seller, please make sure to select a Reserved Instance with "AWS" listed as the seller in the console purchasing screen.

 

 

References:

 https://aws.amazon.com/ec2/pricing/reserved-instances/buyer/

Tuesday, March 14, 2023

Websites that give us free templates

 There are many websites that offer free static templates for websites. Here are a few popular options:

    FreeHTML5.co
    HTML5 UP
    Colorlib
    Templated
    Free CSS
    W3Layouts
    Start Bootstrap
    BootstrapMade
    Themezy
    OS Templates

Note that some of these sites may offer both free and premium templates, so be sure to double-check before downloading to make sure it's the right option for you.

Tuesday, March 7, 2023

How to allow emails from all servers,

To fix the issue with the SPF record in GoDaddy, you can follow these steps:

    Log in to your GoDaddy account and go to the DNS management page for your domain.

    Locate the existing SPF record for your domain, which should be in the TXT record type.

    Edit the existing SPF record by appending "+ip4:209.59.154.50" to it, which authorizes the server to send mail as well. The new SPF record should look something like this:

    v=spf1 include:spf.protection.outlook.com +ip4:209.59.154.50 ~all

    Save the changes to the DNS records.

    Wait for the changes to propagate, which may take up to 48 hours.

After making these changes, the server's IP address should be authorized to send mail for your domain, which should prevent messages from being discarded or sent to spam folders.
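The SPF edit above can be sketched in shell: insert the ip4 mechanism just before the trailing ~all qualifier (IP address taken from this note):

```shell
spf='v=spf1 include:spf.protection.outlook.com ~all'

# Strip the trailing " ~all", append the ip4 mechanism, then put ~all back
new_spf="${spf% ~all} +ip4:209.59.154.50 ~all"
echo "$new_spf"
```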

Wednesday, January 18, 2023

Lumen Back-end , Angularjs, Vuejs front-end AWS deployment instructions

https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-20-04


sudo apt update
sudo apt install apache2

--------no need to do this----------
sudo ufw status
sudo ufw app list

sudo ufw allow 'Apache Full'
press 'y' and Enter to continue

sudo ufw enable
-------------------------------
check: http://server ip

-------------------------------------------

install mysql

    $ sudo apt install mysql-server

switch its authentication method from auth_socket to mysql_native_password

        sudo mysql
        SELECT user,authentication_string,plugin,host FROM mysql.user;


         ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_strong_password';
        FLUSH PRIVILEGES;
        Test:
        SELECT user,authentication_string,plugin,host FROM mysql.user;
        exit

 
    $ sudo mysql_secure_installation

--VALIDATE PASSWORD PLUGIN: answer No
All remaining prompts: answer Yes
---------------------------------------------------

Install PHP8.1

---------------------------------------------------------------------------------------------------

Add PPA for PHP 8.1
    sudo apt install software-properties-common

    sudo add-apt-repository ppa:ondrej/php

    sudo apt-get update

    sudo apt install php8.1

    sudo apt install php8.1-common php8.1-mysql php8.1-xml php8.1-xmlrpc php8.1-curl php8.1-gd php8.1-imagick php8.1-cli php8.1-dev php8.1-imap php8.1-mbstring php8.1-opcache php8.1-soap php8.1-zip php8.1-intl -y
    
    
    
    ----------------------------------------------------------------------------------------------
INSTALL MONGODB in ubuntu 20, php8.1

https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/?_ga=2.242670018.1950484347.1673427386-451534989.1673427383

sudo apt remove mongodb-org*
sudo apt autoremove mongodb-org*
sudo apt purge mongodb-org*



wget -qO - https://www.mongodb.org/static/pgp/server-6.0.asc | sudo apt-key add -
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
sudo systemctl start mongod
sudo systemctl daemon-reload
sudo systemctl status mongod



sudo apt-get install php8.1-mongodb

how to work with mongodb

sudo systemctl start mongod
mongosh
show databases;
use testing_api
db.dropDatabase()
//db.logs.remove({});//deprecated
show collections


-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------



cd /etc/apache2/sites-available

Step 2 — Modify the new api.domain.work.conf file

    -
<VirtualHost *:80>
ServerName api.domain.work
ServerAdmin admin@domain.work
DocumentRoot /var/www/html/api/public
<Directory /var/www/html/api>
AllowOverride All
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
RewriteEngine on
RewriteCond %{SERVER_NAME} =api.domain.work
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>


--------------------------
<VirtualHost *:80>
ServerName domain.work
ServerAdmin admin@domain.work
DocumentRoot /var/www/html/user
<Directory /var/www/html/user>
AllowOverride All
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
RewriteEngine on
RewriteCond %{SERVER_NAME} =domain.work
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
-------------------------------------------------------------------------------------
<VirtualHost *:80>
ServerName dashboard.domain.com
ServerAdmin admin@domain.com
DocumentRoot /var/www/html/dashboard.domain.com
<Directory /var/www/html/dashboard.domain.com>
AllowOverride All
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
RewriteEngine on
RewriteCond %{SERVER_NAME} =dashboard.domain.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
---------------------------------------------------------------------------------------
set up domain name in godaddy
set up ssl using certbot:

$ sudo snap install --classic certbot


$ sudo certbot --apache -d domain.solutions -d www.domain.solutions
dashboard.medicalfactory.org
$ sudo certbot --apache -d supplier.domain.org
$ systemctl  restart apache2
mongo
sudo a2ensite api.domain.com.conf
sudo apachectl configtest
-------------------------------------------------------
- to enable the site:
        sudo a2ensite <sitename>.conf
- to test for syntax errors (expect "Syntax OK"):
        sudo apachectl configtest
- to restart the Apache service:
        sudo systemctl restart apache2
---------------------------------------------------------------------
install swap space

https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-20-04#step-6-tuning-your-swap-settings



sudo swapon --show
free -h
df -h
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
ls -lh /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo swapon --show
free -h
sudo cp /etc/fstab /etc/fstab.bak
sudo nano /etc/fstab   # or append the line automatically:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
cat /proc/sys/vm/swappiness
sudo sysctl vm.swappiness=10
sudo nano /etc/sysctl.conf
at the bottom add:
vm.swappiness=10
vm.vfs_cache_pressure=50

sudo sysctl vm.vfs_cache_pressure=50
cat /proc/sys/vm/vfs_cache_pressure
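Collected together, the two persistent settings at the bottom of /etc/sysctl.conf look like this (they take effect on reboot, or immediately with `sudo sysctl -p`):

```
# swap tuning for a server workload
vm.swappiness=10
vm.vfs_cache_pressure=50
```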





-------------------------------------------------------------------------

download tar.gz newapi backend files from gitlab
take backup of humhum.work storage and DB and upload to server
copy .env file
install composer --- https://getcomposer.org/download/
Steps to install Composer:

1. sudo apt install wget php-cli php-zip unzip
2. php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
3. HASH="$(wget -q -O - https://composer.github.io/installer.sig)"
4. php -r "if (hash_file('SHA384', 'composer-setup.php') === '$HASH') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
   Output: Installer verified
5. sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer
   Output: All settings correct for using Composer
           Downloading...
           Composer (version 2.0.14) successfully installed to: /usr/local/bin/composer
           Use it: php /usr/local/bin/composer
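The hash check in the steps above is what makes this install safe: the downloaded installer is only executed if its SHA-384 digest matches the signature published by Composer. The same verify-before-run pattern, demonstrated offline with sha384sum on a throwaway file (file names here are hypothetical):

```shell
# create a stand-in "installer" and record its expected digest
printf 'echo installer ran\n' > /tmp/fake-installer.sh
EXPECTED="$(sha384sum /tmp/fake-installer.sh | cut -d' ' -f1)"

# later, before running anything: recompute the digest and compare
ACTUAL="$(sha384sum /tmp/fake-installer.sh | cut -d' ' -f1)"
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo 'Installer verified'
else
    echo 'Installer corrupt'
    rm -f /tmp/fake-installer.sh
fi
```

A mismatch means the download was corrupted or tampered with, so the file is deleted rather than executed.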


--------------------------------------------------------------------------
cd /var/www/html/api.domain.com

cp .env.prod .env
copy public/.htaccess
copy storage

sudo chmod 777 -R storage
sudo ln -s /var/www/html/api.medicalfactory.org/storage/app/public /var/www/html/api.medicalfactory.org/public/storage
sudo ln -s /var/www/html/api.sqdesignz.com/storage/app/public /var/www/html/api.sqdesignz.com/public/storage

sudo chown www-data:www-data -R storage
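chmod 777 makes the storage tree writable by every user on the box; once the chown to www-data above is in place, group-writable 775 (i.e. `sudo chmod 775 -R storage`) is usually enough and safer. A quick offline demo of what the 775 bits mean, on a throwaway directory:

```shell
# 775 = owner rwx, group rwx, others read+traverse only
mkdir -p /tmp/storage-demo
chmod 775 /tmp/storage-demo

# print the octal mode to confirm (GNU stat)
stat -c '%a' /tmp/storage-demo
# prints: 775
```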

composer install
--- if composer install shows PHP extension errors ---

sudo a2enmod php8.1

sudo systemctl restart apache2

$ sudo update-alternatives --set php /usr/bin/php8.1
-----------------------------------------
mysql -u root -p
(password: fabrica@mysupply)
CREATE DATABASE api;

In /var/www/html/newapi/database/seeders/DatabaseSeeder.php, uncomment all seeders,
but comment out this line:
Permission::truncate();

php artisan migrate --seed
---------------------------------------------------------------------------------
sudo nano public/.htaccess


cron job automation documentation

----------------------------------------------------------------------------------------
Front end: supplier portal (Angular)

# Build Instructions (node version 16)

sudo rm -rf node_modules/
sudo npm install

- change `src/environments/environment.prod.ts`
  ```json
  {
    "baseURL": "https://{{api_domain}}/api/v1/",
    "baseStorage": "https://{{api_domain}}/storage/",
    "baseHREF": "{{base for front domain}}"
  }
  ```

ng build --configuration production --base-href https://supplier.domain.com
ng build --configuration production --base-href https://supplier.domain.org
run `npm run build {{base_href for front domain}}`
- upload `dist` folder content to server
--------------------------------------------------
copy to server
sudo nano .htaccess
<IfModule mod_rewrite.c>
        RewriteEngine On

        # -- REDIRECTION to https (optional):
        # If you need this, uncomment the next two commands
        # RewriteCond %{HTTPS} !on
        # RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
        # --

        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d

        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^(.*) index.html [NC,L]
</IfModule>
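The rewrite rules above are a standard single-page-app fallback: requests whose path exists on disk as a file or directory are served as-is, and everything else is rewritten to index.html so the Angular router can resolve the route. The same decision, sketched in shell with hypothetical paths:

```shell
# set up a fake docroot with one real asset
docroot=/tmp/spa-demo
mkdir -p "$docroot/assets"
touch "$docroot/index.html" "$docroot/assets/logo.png"

# mimic the two RewriteCond checks (-f = regular file, -d = directory)
for uri in /assets/logo.png /orders/42; do
    if [ -f "$docroot$uri" ] || [ -d "$docroot$uri" ]; then
        echo "$uri -> served directly"
    else
        echo "$uri -> index.html"
    fi
done
# prints:
#   /assets/logo.png -> served directly
#   /orders/42 -> index.html
```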

-------------------------------------------------------------
Front end: admin portal (Angular)

# Build Instructions (node version 16)

sudo rm -rf node_modules/
sudo npm install

- change `src/environments/environment.prod.ts`
  ```json
  {
    "baseURL": "https://{{api_domain}}/api/v1/",
    "baseStorage": "https://{{api_domain}}/storage/",
    "baseHREF": "{{base for front domain}}"
  }
  ```

  Example (humhum production values):
  ```ts
  export const environment = {
    production: true,
    baseURL: 'https://api.domain.com/api/v1/',
    // baseURL: 'https://domain.work/newapi/public/api/v1/',
    baseStorage: 'https://api.domain.com/storage/',
    baseHREF: '/humhum/',
    firebase: {
      apiKey: 'AIzaSyBvDfgkv2VnrIMbrT9oJYgtlL6XGthURdY',
      projectId: 'humhum-d8850',
      messagingSenderId: '569896873041',
      appId: '1:569896873041:web:9551d54e2e7056fcacdc70',
      vapidKey:
        'BCg19OadFV9lZNChEu1nhKI9zW2HRqiVls8U_4UVQyRLz5rVf3-2qzUSBWdTB7U0nqa-O7lho69FM8VdRsQW970',
    },
    defaultPerPage: 20,
  };
  ```

ng build --configuration production --base-href https://dashboard.domain.com
ng build --configuration production --base-href https://dashboard.domain.org
run `npm run build {{base_href for front domain}}`
- upload `dist` folder content to server
---------------------------------------------------------------------------------------------------------------
Front end: admin portal vuejs

# Build Instructions (node version 16)

sudo rm -rf node_modules/
sudo npm install
npm run build

- change .env
-find and remove humhum-user/
  --------------------------------------------------


copy to server
sudo nano .htaccess
<IfModule mod_rewrite.c>
        RewriteEngine On

        # -- REDIRECTION to https (optional):
        # If you need this, uncomment the next two commands
        # RewriteCond %{HTTPS} !on
        # RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
        # --

        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d

        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^(.*) index.html [NC,L]
</IfModule>
-------------------------------------------------------------
Solution for all npm ERR! code EINTEGRITY errors 🙏

$ cd <project_directory>
$ rm -rf package-lock.json npm-shrinkwrap.json node_modules
$ npm cache clean --force
$ npm cache verify
$ npm install
---------------------------------------------------------------

FIXING MYSQL ISSUES:

SHOW COLUMNS FROM tokens;
-- general form: SHOW COLUMNS FROM table_name;

Admin login, tokens table error (Laravel's id() primary keys are BIGINT UNSIGNED, so foreign-key columns referencing them must match the type):
ALTER TABLE tokens MODIFY client_id bigint UNSIGNED;
ALTER TABLE tokens MODIFY user_id bigint UNSIGNED;



Admin login, popup error after login (caused by ONLY_FULL_GROUP_BY in sql_mode):
SET GLOBAL sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
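SET GLOBAL only lasts until mysqld restarts. To drop ONLY_FULL_GROUP_BY permanently, set sql_mode in the server config instead; on Ubuntu the file is typically /etc/mysql/mysql.conf.d/mysqld.cnf. The sketch below is MySQL 8's default mode string with ONLY_FULL_GROUP_BY removed:

```
[mysqld]
sql_mode = "STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION"
```

Restart MySQL (`sudo systemctl restart mysql`) after editing for the change to apply.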

-------------------------------------------------------------------------

git pull, then run
php artisan migrate:fresh --seed
(NB: migrate:fresh drops every table and re-runs all migrations, so never run it against a database whose data you need)
then
php artisan permission:cache-reset


------------------------------------------------------------------------------



Monday, January 16, 2023

AWS instance to Reserved Instance

 

The way to get to Reserved billing is counterintuitive.

Basically, you have to pretend that you’re buying a new instance, of the Reserved type, with the exact same attributes of the one you want to convert. Upon making the purchase you will have converted the other one.

 


 


It’s gross, but it works.

  1. Navigate to https://console.aws.amazon.com/ec2/v2/home.
  2. Click on Reserved Instances.
  3. Select an instance that matches the one you want to replace the billing on, for both instance type/size and instance availability zone. For example, mine was T2.Medium and US-East-1a.
  4. Make the purchase.

So what then happens is that Amazon finds your running On Demand instance and converts it to a Reserved instance. And now if you go into billing you should see that reflected.

And more importantly, if you go into your instances dashboard you’ll just see the same ones you had before. You haven’t actually purchased a new box; you’ve just converted the one that matched those specs from On Demand to Reserved.

For anyone running a box in EC2 that isn’t likely to change in size or location over one to three years, I highly recommend you check out Reserved Instances. They could save you a massive amount of money, just like it did for me.

 

References:

https://danielmiessler.com/blog/saved-ec2-bill-5-minutes-switching-reserved-instance/