CLI & GUI v0.17.1.3 'Oxygen Orion' released!

This is the CLI & GUI v0.17.1.3 'Oxygen Orion' point release. This release predominantly features bug fixes and performance improvements. We nevertheless recommend that users upgrade, as it includes mitigations for the issue where transactions occasionally fail.

(Direct) download links (GUI)

(Direct) download links (CLI)

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
38a04a7bd00733e9d943edba3004e44730c0848fe5e8a4fca4cb29c12d1e6b2f monero-android-armv7-v0.17.1.3.tar.bz2
0e94f58572646992ee21f01d291211ed3608e8a46ecb6612b378a2188390dba0 monero-android-armv8-v0.17.1.3.tar.bz2
ae1a1b61d7b4a06690cb22a3389bae5122c8581d47f3a02d303473498f405a1a monero-freebsd-x64-v0.17.1.3.tar.bz2
57d6f9c25bd1dbc9d6b39fcfb13260b21c5594b4334e8ed3b8922108730ee2f0 monero-linux-armv7-v0.17.1.3.tar.bz2
a0419993fbc6a5ca11bcd2e825acef13e429824f4d8c7ba4ec73ac446d2af2fb monero-linux-armv8-v0.17.1.3.tar.bz2
cf3fb693339caed43a935c890d71ecab5b89c430e778dc5ef0c3173c94e5bf64 monero-linux-x64-v0.17.1.3.tar.bz2
d107384ff7b1f77ee4db93940dbfda24d6045bf59c43169bc81a0118e3986bfa monero-linux-x86-v0.17.1.3.tar.bz2
79557c8bee30b229bda90bb9ee494097d639d60948fc2ad87a029359b56b1b48 monero-mac-x64-v0.17.1.3.tar.bz2
3eee0d0e896fb426ef92a141a95e36cb33ca7d1e1db3c1d4cb7383994af43a59 monero-win-x64-v0.17.1.3.zip
c9e9dde61b33adccd7e794eba8ba29d820817213b40a2571282309d25e64e88a monero-win-x86-v0.17.1.3.zip
## GUI
15ad80b2abb18ac2521398c4dad9b8bfea2e6fc535cf4ebcc60d99b8042d4fb2 monero-gui-install-win-x64-v0.17.1.3.exe
3bed02f9db5b7b2fe4115a636fecf0c6ec9079dd4e9284c8ce2c67d4996e2a4a monero-gui-linux-x64-v0.17.1.3.tar.bz2
23405534c7973a8d6908b76121b81894dc853039c942d7527d254dfde0bd2e8f monero-gui-mac-x64-v0.17.1.3.dmg
0a49ccccb561445f3d7ec0087ddc83a8b76f424fb7d5e0d725222f3639375ec4 monero-gui-win-x64-v0.17.1.3.zip
#
#
# ~binaryFate

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl+oVkkACgkQ8K9NRioL
35Lmpw//Xs09T4917sbnRH/DW/ovpRyjF9dyN1ViuWQW91pJb+E3i9TY+wU3q85k
LyTihDB5pV+3nYgKPL9TlLfaytJIQG0vYHykPWHVmYmvoIs9BLarGwaU3bjO0rh9
ST5GDMdvxmQ5Y1LTwVfKkmBJw26DAs0xAvjBX44oRQjjuUdH6JdLPsqa5Kb++NCM
b453m5s8bT3Cw6w0eJB1FQEyQ5BoDrwYcFzzsS1ag/C4Ylq0l6CZfEambfOQvdUi
7D5Rywfhiz2t7cfn7LaoXb74KDA/B1bL+R1/KhCuFqxRTOQzq9IxRywh4VptAAMU
UR7jFHFijOMoyggIbkD48JmAjlBnqIyQJt4D5gbHe+tSaSoKdgoTGBAmIvaCZIng
jfn9pTNzIJbTptsQhhyZqQQIH87D8BctZfX7pREjJmMNGwN2jFxXqUNqYTso20E6
YLtC1mkZBBZ294xHqT1mQpfznc6uVJhhoJpta0eKxkr1ahrGvWBDGZeVhLswnBcq
9dafAkR14rdK1naiCsygb6hMvBqBohVu/bWuhycJcv6XRvlP7UHkR6R8+s6U4Tk2
zaJERQF+cHQpEak5aEJIvDlb/mxteGyvPkPyL7UmADEQh3C4nREwkDSdnitYnF+e
HxJZkshoC98+YCkWUP4+JYOOT158jKao3u0laEOxVGOrPz1Nc64=
=Ys4h
-----END PGP SIGNATURE-----
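For those verifying on Linux or Mac OS X, the process looks roughly like this (a minimal sketch; it assumes you saved the signed message above as hashes.txt and fetched binaryFate's key from the source repository as binaryfate.asc, both file names being illustrative):

gpg --import binaryfate.asc
gpg --verify hashes.txt
sha256sum monero-gui-linux-x64-v0.17.1.3.tar.bz2
# compare the printed sum with the matching line in hashes.txt

The guides linked above cover the equivalent steps on Windows.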

Upgrading (GUI)

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear shortly with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (antivirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users/<username>/Monero/ (Mac OS X), or home/<username>/Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

Upgrading (CLI)

You ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or GitHub.
  2. Extract the new binaries to a new directory of your liking.
  3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x, v0.16.x.x, or v0.17.x.x directory).
  4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.17.1.3, it will simply pick up where it left off.
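As a rough illustration of steps 2-4 on Linux (the extracted directory name and wallet name below are examples; adapt them to your own setup):

tar -xjf monero-linux-x64-v0.17.1.3.tar.bz2
cd monero-x86_64-linux-gnu-v0.17.1.3
cp /path/to/old-directory/mywallet* .
./monerod --detach
./monero-wallet-cli --wallet-file mywallet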

Release notes (GUI)

Some highlights of this minor release are:
  • Android support (experimental)
  • Linux binary is now reproducible (experimental)
  • Simple mode: transaction reliability improvements
  • New transaction confirmation dialog
  • Wizard: minor design changes
  • Linux: high DPI support
  • Fix "can't connect to daemon" issue
  • Minor bug fixes
Some highlights of this major release are:
  • Support for CLSAG transaction format
  • Socks5 proxy support, automatically enabled on Tails
  • Simple mode transactions are sent through the local daemon, improved reliability
  • Portable mode, save wallets + config to "storage" folder
  • History page: improvements, incoming / outgoing labels
  • Transfer: new success dialog
  • CMake build system improvements
  • Windows cross compilation support using Docker
  • Various minor bug and UI fixes
Note that you can find a full change log here.

Release notes (CLI)

Some highlights of this minor release are:
  • Add support for I2P and Tor seed nodes (--tx-proxy)
  • Add --ban-list daemon option to ban a list of IP addresses
  • Switch to Dandelion++ fluff mode if no out connections for stem mode
  • Fix a bug with relay_tx
  • Fix a rare readline related crash
  • Use /16 filtering on IPv4-within-IPv6 addresses
  • Give all hosts the same chance of being picked for connecting
  • Minor bugfixes
Some highlights of this major release are:
  • Support for CLSAG transaction format
  • Deterministic unlock times
  • Enforce claiming maximum coinbase amount
  • Serialization format changes
  • Remove most usage of Boost library
  • Always send raw transactions through P2P, don't use bootstrap daemon
  • Update InProofV1, OutProofV1, and ReserveProofV1 to V2
  • ASM optimizations for wallet refresh (macOS / Linux)
  • Randomized delay when forwarding txes from i2p/tor -> ipv4/6
  • New show_qr_code wallet command for CLI
  • Add ZMQ/Pub support for txpool_add and chain_main events
  • Various bug fixes and performance improvements
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here. A minimal example command is shown after this list.
  • Ledger Monero users, please be aware that version 1.7.4 of the Ledger Monero App is required in order to properly use CLI or GUI v0.17.1.3.
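As a quick illustration of the pruning guide mentioned above: pruning can be enabled by starting the daemon with the --prune-blockchain flag (a sketch; see the guide for details and caveats):

./monerod --prune-blockchain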

Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Ledger & Trezor CLI guides

Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to manually set a remote node, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html
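For reference, CLI users can point their wallet at a remote node with the --daemon-address flag (the host below is a placeholder, not a recommended node):

./monero-wallet-cli --daemon-address node.example.com:18081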

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual that is made for Windows users. Although this manual focuses on Windows, we did add the commands necessary to get the CodeReady Containers running on Linux or macOS if you want to try it there. Be warned, however, that there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified within the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform and has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or MacOS we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers, which are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces; these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is its efficient container orchestration, which allows for faster container provisioning, deployment and management. It does this by streamlining and automating container management.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are done within the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don't have this basic knowledge or have trouble with the basic command line interface commands from PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the sheer number of commands in the documentation can be overwhelming.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
MacOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of container platforms such as Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

Running the Red Hat OpenShift CodeReady Containers requires the following minimum hardware:
Hardware requirements
CodeReady Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
The Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers, a few steps must be undertaken. Because an OpenShift account is necessary to use the application, creating one will be the first step. An account can be made on "https://www.openshift.com/", where you need to press Log in and then select the option "Create one now".
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from "https://cloud.redhat.com/openshift/install/crc/installer-provisioned". Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved, because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use it to go to the location in your $PATH where you extracted the CodeReady zip.
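For illustration, assuming you extracted the release to C:\Users\[username]\crc (a hypothetical location), you could add it to the PATH of the current PowerShell session and change into it like this:

$Env:PATH = "C:\Users\[username]\crc;$Env:PATH"
cd C:\Users\[username]\crc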
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed, you have to reboot your system. When the system has restarted, you can start the new CodeReady Containers virtual machine with the $crc start command, which starts the CodeReady virtual machine and the OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes, you need to delete the virtual machine with the $crc delete command, then create and start a new virtual machine with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers, so to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that the configuration of a virtual machine cannot be changed once it has been created. For this tutorial, however, it is not necessary to change the configuration; if you don't want to make any changes, please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on; if this is the case, please start the machine with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers, use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands for this binary and virtual machine are:
get, this command allows you to see the value of a configurable property
set, this command assigns a value to a configurable property
unset, this command removes a previously set value, reverting the property to its default
view, this command displays the current configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or emit a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get
C:\Users\[username]\$PATH>crc config set
C:\Users\[username]\$PATH>crc config unset
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help
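As an illustration, skipping a single check could look like the following (the property name here is only an example; list the real property names for your crc version with $crc config --help):

C:\Users\[username]\$PATH>crc config set skip-check-ram true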

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set CPUs <number>. Keep in mind that the default number of vCPUs is 4, and the number you assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <size>. Keep in mind that the default amount of memory is 9216 MiB (mebibytes), and the amount you assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs <number>
C:\Users\[username]\$PATH>crc config set memory <size-in-MiB>
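For example, to give the virtual machine 6 vCPUs and 12 GiB of memory (illustrative values that satisfy the minimums above, using the property names from this chapter):

C:\Users\[username]\$PATH>crc config set CPUs 6
C:\Users\[username]\$PATH>crc config set memory 12288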

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers; these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup; this command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks are run to verify the configuration.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● The CodeReady Containers create a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires an entry for api.crc.testing to function properly; this entry is added to /etc/hosts, pointing at the VM IP address.

Linux DNS setup

On Linux, CodeReady Containers expect a slightly different DNS configuration: they expect NetworkManager to manage networking. NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
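Once the virtual machine is running, you can check that the forwarding works, for example with dig (a sketch; dig ships in the dnsutils/bind-utils package on most distributions):

dig +short api.crc.testing
# this should print the VM address, e.g. 192.168.130.11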

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that $crc start will provide you with the password that is needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
oc can now be used to interact with your OpenShift cluster. If, for instance, you want to verify whether the OpenShift cluster Operators are available, you can execute the command:
$oc get co 
Keep in mind that, by default, CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console, you have to log in on the cluster. If you have not yet done this, it can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the drop-down menu at the top left.
Now that you are properly logged in, press the drop-down menu shown in the image below; from there, click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The containers in OpenShift Container Platform are based on OCI- or Docker-formatted images. An image is a binary that contains everything needed to run a container, as well as metadata describing the container's requirements.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and the topology view. From here on, select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following; this means that the application is running successfully.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling: vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and hard disk to the existing machine, and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By pressing the up or down arrow, pods of the same application can be added or removed. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.
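The same scaling can also be done from the command line; a sketch using oc, assuming the deployment from this demonstration is named mediawiki:

$oc scale deployment mediawiki --replicas=3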

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP addresses. This makes all containers within a Pod behave as if they were on the same host. Giving each pod its own IP address means that pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services, such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and/or configured. Two other options that might be interesting, but will not be demonstrated in this manual, are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
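For reference, a route can also be created with the oc client instead of the console; a sketch assuming the service from this demonstration is named mediawiki:

$oc expose service mediawiki
$oc get route mediawiki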
Storage
OpenShift makes use of Persistent Storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to make persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options. It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV; this can be done by executing the following command:
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset or, if you wish to reuse the same storage asset, you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift; to do this, you would need to follow these steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and display the following attributes for each of them: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on "Monitoring". Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. These can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we're going to create a new user. The way this is done depends on the identity provider, and on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps are as follows:
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-username>
Here, <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-username> <username>
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we're going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <binding-name> --clusterrole=<role> --user=<username>
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller --clusterrole=cluster-admin --user=admin

What did you achieve?

If you followed all the steps within this manual, you should now have a functioning MediaWiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
It is possible that your CodeReady Containers can't connect to the internet due to a nameserver error. When this is encountered, a fix that worked for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administrative Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift

./play.it 2.12: API, GUI and video games


./play.it is free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.).
A more complete description of ./play.it has already been posted in linux_gaming a couple months ago: ./play.it, an easy way to install commercial games on GNU/Linux
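To give an idea of the workflow (a sketch with hypothetical file names; the actual script and installer names depend on the game): you run the game's ./play.it script against the DRM-free installer, then install the packages it produces with your distribution's tools.

sh ./play-some-game.sh some_game_installer.sh
# then, for example on a Debian-based system:
sudo apt install ./some-game_1.0-1_all.deb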
It's already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12 and focus on the different points of ./play.it that kept us busy during all this time, and of which coding was only a small part.

What’s new with 2.12?

Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;)
Compared to the usual updates, 2.12 is a major one, especially since we had slowed down the addition of new features for two years. Some patches had been gathering dust since the end of 2018 before finally being integrated in this update!
The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:

Development migration

History

As with many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what's not shared doesn't last long, and so the first public Git repository of the project was born. The easing of collaborative work was only accidentally achieved by this quest for eternity, and wasn't the original motivation for making the repository publicly available.
Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:

Dedicated forge

As development progressed, ./play.it began to need more resources, dividing its code into several repositories to improve the workflow of the different aspects of the project, adding continuous integration tests and their constraints, etc. A furious desire to understand the nooks and crannies behind a forge platform was the last deciding factor towards hosting a dedicated forge.
So it happened: we deployed a forge platform on a dedicated server, hugely benefiting from the tremendous work achieved by the Debian maintainers of the GitLab package. In return, we tried to contribute our findings to improving the packaging of this software.
That was not expected, but this migration happened just a short time before the announcement "Déframasoftisons Internet !" (French article) about the planned end of Framagit.
This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and has since been moved to another VPS, rented from Hetzner. The specifications are similar, as well as the service, but thanks to this migration our hosting costs have been cut in half. Keep in mind that this is paid for by a single person, so any little donation helps a lot on this front. ;)
To the surprise of our system administrator, this last migration took only a couple hours with no service interruption reported by our users.

Forge access

This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you not to abuse this; the main restriction is that we do not wish to host projects unrelated to ./play.it. Of course, exceptions are made for our active contributors, who are allowed to host some personal projects there.
So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.

API

With the collection of supported games growing endlessly, we have started the development of a public API allowing access to lots of information related to ./play.it.
This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the handled archives, and the games installable through the project. Relations between those items are, of course, handled, enabling requests like: "What packages are required on my system to install Cæsar Ⅲ?" or "What are the free (as in beer) games handled via DOSBox?".
Originally developed as support for the new, in-development website (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build a complete video game handling software (downloading, installation, starting, etc.) using ./play.it as one of its building bricks.
For those curious about the technical side, it's an API based on Lumen that makes requests on a MariaDB database, all self-hosted on Debian Sid. Not only is the code of the API versioned on our forge, but so are the structure and content of the databases, which will allow those who desire it to easily install a local version.

New website

Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki.
Indeed, while the lack of a database and the plain-text file structure of DokuWiki seemed attractive at first, when ./play.it supported only a handful of games (link in French), this became more and more inconvenient as the library of games supported by ./play.it grew.
We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available.
If you feel like providing a helping hand on this task, some priority tasks have been identified to allow opening a new website able to replace the current one. And for those interested in technical details, this website was developed in PHP using the Laravel framework. The current in-development version is hosted for now on the same Debian Sid server as the API.

GUI

A regular comment about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in the terminal remains somewhat intimidating. Our answer until now has been that, while the project itself doesn't aim to provide a graphical interface (KISS principle, "Keep it simple, stupid", still and always), it would be relatively easy to later develop a graphical front-end for it.
Well, it happens that this is now a reality. Around the time of our latest publication, one of our contributors, using the API we just talked about, developed a small prototype that is usable enough to warrant a little shout-out. :-)
In practice, it is some small Python 3 code (an HCI completely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands issued, but the user shouldn't have to input anything in it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it is indeed relatively easy, since a script of less than 500 lines of code (written quickly over a weekend) was enough to do the job!
Of course, this graphical interface project stays independent from the main project, and is maintained in a specific repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent other similar projects from being born, for example using a different language or graphical toolkit (we, globally, don't have any particular affinity towards Python or GTK).
The use of this HCI involves three steps: first, a list of available games is displayed, coming directly from our API. You just need to select from the list (optionally using the search bar) the game you want to install. The tool then switches to a second display, which lists the required files. If several alternatives are available, the user can select the one they want to use. All those files must be in the same directory, and the address bar on the top enables selecting which directory to use (a click on the open button on the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move ahead to the third step, which is just watching ./play.it do its job. :-) Once done, a simple click on the button on the bottom will run the game (and from this point the game is fully integrated on your system as usual, so you no longer need this tool to run it).
To download potentially missing files, the HCI will use, depending on what's available on the system, either wget, curl or aria2c (this last one also handling torrents), whose output will be displayed in the terminal of the third phase, just before running the scripts. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input if the corresponding environment variable is set, which is more user-friendly); otherwise su will be used.
Of course, any suggestion for an improvement will be received with pleasure.

New games

Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
If your favourite game is not supported by ./play.it yet, you should ask for it in the dedicated tracker on our forge. The only requirement for a request to be valid is that there exists a version of the game that is not burdened by DRM.

What’s next?

Our team being inexhaustible, work on the future 2.13 version has already begun…
A few major objectives of this next version are:
If your desired features aren't on this list, don't hesitate to let us know in the comments of this news release. ;)

Links

submitted by vv224 to linux_gaming

Unable to run custom scripts via dmenu when it is started with i3's mod+d key

I have encountered strange behaviour regarding dmenu_run and dmenu_recency. When I run dmenu_run or dmenu_recency from a terminal and then execute a simple script like echo "test", the value test is printed in the terminal. However, when I run dmenu_recency or dmenu_run with an i3 keybinding like:
bindsym $mod+d exec --no-startup-id dmenu_recency
and then execute the same simple script, nothing happens. Dmenu launches other installed programs just fine; it just doesn't work for the execution of my custom scripts.
What am I missing here? I suspect I have to add something else to my scripts, but I don't know what. For now it is just plain this:
echo "test"

EDIT: OK, maybe the script echo "test" is not the best example, since it is true that there is no open terminal to write to.
But the same thing happens if I try to execute a script that looks like this:
code ~/.i3/config
This just opens the i3 config file with Visual Studio Code. Again, this works when I execute it via dmenu_run called from an existing terminal, but it doesn't work when executed via dmenu_run called via the i3 keybinding mod+d.
EDIT 2:
.i3/config
# i3 config file (v4) # Please see http://i3wm.org/docs/userguide.html for a complete reference! # Set mod key (Mod1=, Mod4=) set $mod Mod4 # My testing shortcuts bindsym $mod+c exec code bindsym $mod+Shift+x exec terminal; exec terminal bindsym $mod+F4 exec /home/erik/Programs/pycharm-community-2020.2.1/bin/pycharm.sh bindsym $mod+Shift+F2 exec /home/erik/CustomScripts/google_calendar # CONFIGURABLE PRINTSCREENS OPTIONS # take a screenshot of a screen region and copy it to a clipboard #bindsym --release Shift+Print exec "ScreenCapture.sh -s /home/erik/Pictures/Screenshots/" # take a screenshot of a whole window and copy it to a clipboard #bindsym --release Print exec "ScreenCapture.sh /home/erik/Pictures/Screenshots/" # set default desktop layout (default is tiling) # workspace_layout tabbed  # Configure border style  default_border pixel 2 default_floating_border normal # Hide borders hide_edge_borders none # change borders bindsym $mod+u border none bindsym $mod+y border pixel 1 bindsym $mod+n border normal # You can also use any non-zero value if you'd like to have a border (this is to prevent issues with gaps) # for_window [class=".*"] border pixel 1 # Font for window titles. Will also be used by the bar unless a different font # is used in the bar {} block below. font xft:URWGothic-Book 11 # Use Mouse+$mod to drag floating windows floating_modifier $mod # start a terminal bindsym $mod+Return exec terminal # kill focused window bindsym $mod+Shift+q kill # start program launcher # bindsym $mod+d exec --no-startup-id dmenu_recency bindsym $mod+d exec --no-startup-id home/erik/CustomScripts/redit_solution dmenu_recency # launch categorized menu bindsym $mod+z exec --no-startup-id morc_menu ################################################################################################ ## sound-section - DO NOT EDIT if you wish to automatically upgrade Alsa -> Pulseaudio later! 
## ################################################################################################ #exec --no-startup-id volumeicon #bindsym $mod+Ctrl+m exec terminal -e 'alsamixer' exec --no-startup-id start-pulseaudio-x11 exec --no-startup-id pa-applet bindsym $mod+Ctrl+m exec pavucontrol ################################################################################################ # Screen brightness controls # bindsym XF86MonBrightnessUp exec "xbacklight -inc 10; notify-send 'brightness up'" # bindsym XF86MonBrightnessDown exec "xbacklight -dec 10; notify-send 'brightness down'" # Start Applications bindsym $mod+Ctrl+b exec terminal -e 'bmenu' bindsym $mod+F2 exec chromium bindsym $mod+F3 exec pcmanfm # bindsym $mod+F3 exec ranger bindsym $mod+Shift+F3 exec pcmanfm_pkexec bindsym $mod+F5 exec terminal -e 'mocp' bindsym $mod+t exec --no-startup-id pkill compton bindsym $mod+Ctrl+t exec --no-startup-id compton -b bindsym $mod+Shift+d --release exec "killall dunst; exec notify-send 'restart dunst'" bindsym Print exec --no-startup-id i3-scrot bindsym $mod+Print --release exec --no-startup-id i3-scrot -w bindsym $mod+Shift+Print --release exec --no-startup-id i3-scrot -s bindsym $mod+Shift+h exec xdg-open /usshare/doc/manjaro/i3_help.pdf bindsym $mod+Ctrl+x --release exec --no-startup-id xkill focus_follows_mouse no # change focus bindsym $mod+j focus left bindsym $mod+k focus down bindsym $mod+l focus up bindsym $mod+semicolon focus right # alternatively, you can use the cursor keys: bindsym $mod+Left focus left bindsym $mod+Down focus down bindsym $mod+Up focus up bindsym $mod+Right focus right # move focused window bindsym $mod+Shift+j move left bindsym $mod+Shift+k move down bindsym $mod+Shift+l move up bindsym $mod+Shift+semicolon move right # alternatively, you can use the cursor keys: bindsym $mod+Shift+Left move left bindsym $mod+Shift+Down move down bindsym $mod+Shift+Up move up bindsym $mod+Shift+Right move right # workspace back and forth (with/without active container) workspace_auto_back_and_forth yes bindsym $mod+b workspace back_and_forth bindsym $mod+Shift+b move container to workspace back_and_forth; workspace back_and_forth # split orientation bindsym $mod+h split h;exec notify-send 'tile horizontally' bindsym $mod+v split v;exec notify-send 'tile vertically' bindsym $mod+q split toggle # toggle fullscreen mode for the focused container bindsym $mod+f fullscreen toggle # change container layout (stacked, tabbed, toggle split) bindsym $mod+s layout stacking bindsym $mod+w layout tabbed bindsym $mod+e layout toggle split # toggle tiling / floating bindsym $mod+Shift+space floating toggle # change focus between tiling / floating windows bindsym $mod+space focus mode_toggle # toggle sticky bindsym $mod+Shift+s sticky toggle # focus the parent container bindsym $mod+a focus parent # move the currently focused window to the scratchpad bindsym $mod+Shift+minus move scratchpad # Show the next scratchpad window or hide the focused scratchpad window. # If there are multiple scratchpad windows, this command cycles through them. 
bindsym $mod+minus scratchpad show

#navigate workspaces next / previous
bindsym $mod+Ctrl+Right workspace next
bindsym $mod+Ctrl+Left workspace prev

# Workspace names
# to display names or symbols instead of plain workspace numbers you can use
# something like: set $ws1 1:mail
#                 set $ws2 2:
set $ws1 1
set $ws2 2
set $ws3 3
set $ws4 4
set $ws5 5
set $ws6 6
set $ws7 7
set $ws8 8

# switch to workspace
bindsym $mod+1 workspace $ws1
bindsym $mod+2 workspace $ws2
bindsym $mod+3 workspace $ws3
bindsym $mod+4 workspace $ws4
bindsym $mod+5 workspace $ws5
bindsym $mod+6 workspace $ws6
bindsym $mod+7 workspace $ws7
bindsym $mod+8 workspace $ws8

# Move focused container to workspace
bindsym $mod+Ctrl+1 move container to workspace $ws1
bindsym $mod+Ctrl+2 move container to workspace $ws2
bindsym $mod+Ctrl+3 move container to workspace $ws3
bindsym $mod+Ctrl+4 move container to workspace $ws4
bindsym $mod+Ctrl+5 move container to workspace $ws5
bindsym $mod+Ctrl+6 move container to workspace $ws6
bindsym $mod+Ctrl+7 move container to workspace $ws7
bindsym $mod+Ctrl+8 move container to workspace $ws8

# Move to workspace with focused container
bindsym $mod+Shift+1 move container to workspace $ws1; workspace $ws1
bindsym $mod+Shift+2 move container to workspace $ws2; workspace $ws2
bindsym $mod+Shift+3 move container to workspace $ws3; workspace $ws3
bindsym $mod+Shift+4 move container to workspace $ws4; workspace $ws4
bindsym $mod+Shift+5 move container to workspace $ws5; workspace $ws5
bindsym $mod+Shift+6 move container to workspace $ws6; workspace $ws6
bindsym $mod+Shift+7 move container to workspace $ws7; workspace $ws7
bindsym $mod+Shift+8 move container to workspace $ws8; workspace $ws8

# Open applications on specific workspaces
# assign [class="Thunderbird"] $ws1
# assign [class="Pale moon"] $ws2
# assign [class="Pcmanfm"] $ws3
# assign [class="Skype"] $ws5

# Open specific applications in floating mode
for_window [title="alsamixer"] floating enable border pixel 1
for_window [class="calamares"] floating enable border normal
for_window [class="Clipgrab"] floating enable
for_window [title="File Transfer*"] floating enable
for_window [class="fpakman"] floating enable
for_window [class="Galculator"] floating enable border pixel 1
for_window [class="GParted"] floating enable border normal
for_window [title="i3_help"] floating enable sticky enable border normal
for_window [class="Lightdm-settings"] floating enable
for_window [class="Lxappearance"] floating enable sticky enable border normal
for_window [class="Manjaro-hello"] floating enable
for_window [class="Manjaro Settings Manager"] floating enable border normal
for_window [title="MuseScore: Play Panel"] floating enable
for_window [class="Nitrogen"] floating enable sticky enable border normal
for_window [class="Oblogout"] fullscreen enable
for_window [class="octopi"] floating enable
for_window [title="About Pale Moon"] floating enable
for_window [class="Pamac-manager"] floating enable
for_window [class="Pavucontrol"] floating enable
for_window [class="qt5ct"] floating enable sticky enable border normal
for_window [class="Qtconfig-qt4"] floating enable sticky enable border normal
for_window [class="Simple-scan"] floating enable border normal
for_window [class="(?i)System-config-printer.py"] floating enable border normal
for_window [class="Skype"] floating enable border normal
for_window [class="Timeset-gui"] floating enable border normal
for_window [class="(?i)virtualbox"] floating enable border normal
for_window [class="Xfburn"] floating enable
# switch to workspace with urgent window automatically
for_window [urgent=latest] focus

# reload the configuration file
bindsym $mod+Shift+c reload

# restart i3 inplace (preserves your layout/session, can be used to upgrade i3)
bindsym $mod+Shift+r restart

# exit i3 (logs you out of your X session)
bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'"

# Set shut down, restart and locking features
bindsym $mod+0 mode "$mode_system"
set $mode_system (l)ock, (e)xit, switch_(u)ser, (s)uspend, (h)ibernate, (r)eboot, (Shift+s)hutdown
mode "$mode_system" {
    bindsym l exec --no-startup-id i3exit lock, mode "default"
    bindsym s exec --no-startup-id i3exit suspend, mode "default"
    bindsym u exec --no-startup-id i3exit switch_user, mode "default"
    bindsym e exec --no-startup-id i3exit logout, mode "default"
    bindsym h exec --no-startup-id i3exit hibernate, mode "default"
    bindsym r exec --no-startup-id i3exit reboot, mode "default"
    bindsym Shift+s exec --no-startup-id i3exit shutdown, mode "default"

    # exit system mode: "Enter" or "Escape"
    bindsym Return mode "default"
    bindsym Escape mode "default"
}

# Resize window (you can also use the mouse for that)
bindsym $mod+r mode "resize"
mode "resize" {
    # These bindings trigger as soon as you enter the resize mode
    # Pressing left will shrink the window's width.
    # Pressing right will grow the window's width.
    # Pressing up will shrink the window's height.
    # Pressing down will grow the window's height.
    bindsym j resize shrink width 5 px or 5 ppt
    bindsym k resize grow height 5 px or 5 ppt
    bindsym l resize shrink height 5 px or 5 ppt
    bindsym semicolon resize grow width 5 px or 5 ppt

    # same bindings, but for the arrow keys
    bindsym Left resize shrink width 5 px or 5 ppt
    bindsym Down resize grow height 5 px or 5 ppt
    bindsym Up resize shrink height 5 px or 5 ppt
    bindsym Right resize grow width 5 px or 5 ppt

    # exit resize mode: Enter or Escape
    bindsym Return mode "default"
    bindsym Escape mode "default"
}

# Lock screen
bindsym $mod+9 exec --no-startup-id blurlock

# Autostart applications
exec --no-startup-id /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1
exec --no-startup-id nitrogen --restore; sleep 1; compton -b
# exec --no-startup-id manjaro-hello
exec --no-startup-id nm-applet
exec --no-startup-id xfce4-power-manager
exec --no-startup-id pamac-tray
exec --no-startup-id clipit
exec --no-startup-id picom
# exec --no-startup-id blueman-applet
# exec_always --no-startup-id sbxkb
exec --no-startup-id start_conky_maia
# exec --no-startup-id start_conky_green
exec --no-startup-id xautolock -time 10 -locker blurlock
exec_always --no-startup-id ff-theme-util
exec_always --no-startup-id fix_xcursor

# Color palette used for the terminal ( ~/.Xresources file )
# Colors are gathered based on the documentation:
# https://i3wm.org/docs/userguide.html#xresources
# Change the variable name at the place you want to match the color
# of your terminal like this:
# [example]
# If you want your bar to have the same background color as your
# terminal background change the line 362 from:
# background #14191D
# to:
# background $term_background
# Same logic applied to everything else.
set_from_resource $term_background background
set_from_resource $term_foreground foreground
set_from_resource $term_color0 color0
set_from_resource $term_color1 color1
set_from_resource $term_color2 color2
set_from_resource $term_color3 color3
set_from_resource $term_color4 color4
set_from_resource $term_color5 color5
set_from_resource $term_color6 color6
set_from_resource $term_color7 color7
set_from_resource $term_color8 color8
set_from_resource $term_color9 color9
set_from_resource $term_color10 color10
set_from_resource $term_color11 color11
set_from_resource $term_color12 color12
set_from_resource $term_color13 color13
set_from_resource $term_color14 color14
set_from_resource $term_color15 color15

# Start i3bar to display a workspace bar (plus the system information i3status if available)
bar {
    i3bar_command i3bar
    status_command i3status
    position bottom

    ## please set your primary output first. Example: 'xrandr --output eDP1 --primary'
    # tray_output primary
    # tray_output eDP1

    bindsym button4 nop
    bindsym button5 nop
    # font xft:URWGothic-Book 11
    strip_workspace_numbers yes

    colors {
        background #222D31
        statusline #F9FAF9
        separator  #ff9a1f

        #                  border  backgr. text
        focused_workspace  #ff9a1f #ff9a1f #292F34
        active_workspace   #595B5B #353836 #FDF6E3
        inactive_workspace #595B5B #222D31 #EEE8D5
        binding_mode       #16a085 #2C2C2C #F9FAF9
        urgent_workspace   #16a085 #FDF6E3 #E5201D
    }
}

# hide/unhide i3status bar
bindsym $mod+m bar mode toggle

# Theme colors
# class                 border  backgr. text    indic.  child_border
client.focused          #ff9a1f #ff9a1f #000000 #ff9a1f
client.focused_inactive #2F3D44 #2F3D44 #1ABC9C #454948
client.unfocused        #2F3D44 #2F3D44 #1ABC9C #454948
client.urgent           #CB4B16 #FDF6E3 #1ABC9C #268BD2
client.placeholder      #000000 #0c0c0c #ffffff #000000

client.background #2B2C2B

#############################
### settings for i3-gaps: ###
#############################

# Set inner/outer gaps
gaps inner 0
gaps outer 0

# Additionally, you can issue commands with the following syntax. This is useful to bind keys to changing the gap size.
# gaps inner|outer current|all set|plus|minus <px>
# gaps inner all set 10
# gaps outer all plus 5

# Smart gaps (gaps used if only more than one container on the workspace)
smart_gaps on

# Smart borders (draw borders around container only if it is not the only container on this workspace)
# on|no_gaps (on=always activate and no_gaps=only activate if the gap size to the edge of the screen is 0)
smart_borders on

# Press $mod+Shift+g to enter the gap mode. Choose o or i for modifying outer/inner gaps. Press one of + / - (in-/decrement for current workspace) or 0 (remove gaps for current workspace). If you also press Shift with these keys, the change will be global for all workspaces.
set $mode_gaps Gaps: (o) outer, (i) inner
set $mode_gaps_outer Outer Gaps: +|-|0 (local), Shift + +|-|0 (global)
set $mode_gaps_inner Inner Gaps: +|-|0 (local), Shift + +|-|0 (global)
bindsym $mod+Shift+g mode "$mode_gaps"

mode "$mode_gaps" {
    bindsym o      mode "$mode_gaps_outer"
    bindsym i      mode "$mode_gaps_inner"
    bindsym Return mode "default"
    bindsym Escape mode "default"
}

mode "$mode_gaps_inner" {
    bindsym plus  gaps inner current plus 5
    bindsym minus gaps inner current minus 5
    bindsym 0     gaps inner current set 0

    bindsym Shift+plus  gaps inner all plus 5
    bindsym Shift+minus gaps inner all minus 5
    bindsym Shift+0     gaps inner all set 0

    bindsym Return mode "default"
    bindsym Escape mode "default"
}

mode "$mode_gaps_outer" {
    bindsym plus  gaps outer current plus 5
    bindsym minus gaps outer current minus 5
    bindsym 0     gaps outer current set 0

    bindsym Shift+plus  gaps outer all plus 5
    bindsym Shift+minus gaps outer all minus 5
    bindsym Shift+0     gaps outer all set 0

    bindsym Return mode "default"
    bindsym Escape mode "default"
}
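A quick tip for experimenting with any of the settings above: besides the $mod+Shift+c / $mod+Shift+r bindings defined in the config, i3 can be told to reload or restart from a terminal via i3-msg, which ships with i3:

i3-msg reload    # re-read the config file
i3-msg restart   # restart i3 in place, keeping your layout/session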
.bashrc
#
# ~/.bashrc
#

[[ $- != *i* ]] && return

colors() {
	local fgc bgc vals seq0

	printf "Color escapes are %s\n" '\e[${value};...;${value}m'
	printf "Values 30..37 are \e[33mforeground colors\e[m\n"
	printf "Values 40..47 are \e[43mbackground colors\e[m\n"
	printf "Value 1 gives a \e[1mbold-faced look\e[m\n\n"

	# foreground colors
	for fgc in {30..37}; do
		# background colors
		for bgc in {40..47}; do
			fgc=${fgc#37} # white
			bgc=${bgc#40} # black

			vals="${fgc:+$fgc;}${bgc}"
			vals=${vals%%;}

			seq0="${vals:+\e[${vals}m}"
			printf " %-9s" "${seq0:-(default)}"
			printf " ${seq0}TEXT\e[m"
			printf " \e[${vals:+${vals+$vals;}}1mBOLD\e[m"
		done
		echo; echo
	done
}

[ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion

# Change the window title of X terminals
case ${TERM} in
	xterm*|rxvt*|Eterm*|aterm|kterm|gnome*|interix|konsole*)
		PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\007"'
		;;
	screen*)
		PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\033\\"'
		;;
esac

use_color=true

# Set colorful PS1 only on colorful terminals.
# dircolors --print-database uses its own built-in database
# instead of using /etc/DIR_COLORS. Try to use the external file
# first to take advantage of user additions. Use internal bash
# globbing instead of external grep binary.
safe_term=${TERM//[^[:alnum:]]/?}   # sanitize TERM
match_lhs=""
[[ -f ~/.dir_colors   ]] && match_lhs="${match_lhs}$(<~/.dir_colors)"
[[ -f /etc/DIR_COLORS ]] && match_lhs="${match_lhs}$(</etc/DIR_COLORS)"
[[ -z ${match_lhs}    ]] \
	&& type -P dircolors >/dev/null \
	&& match_lhs=$(dircolors --print-database)
[[ $'\n'${match_lhs} == *$'\n'"TERM "${safe_term}* ]] && use_color=true

if ${use_color} ; then
	# Enable colors for ls, etc. Prefer ~/.dir_colors #64489
	if type -P dircolors >/dev/null ; then
		if [[ -f ~/.dir_colors ]] ; then
			eval $(dircolors -b ~/.dir_colors)
		elif [[ -f /etc/DIR_COLORS ]] ; then
			eval $(dircolors -b /etc/DIR_COLORS)
		fi
	fi

	if [[ ${EUID} == 0 ]] ; then
		PS1='\[\033[01;31m\][\h\[\033[01;36m\] \W\[\033[01;31m\]]\$\[\033[00m\] '
	else
		PS1='\[\033[01;32m\][\u@\h\[\033[01;37m\] \W\[\033[01;32m\]]\$\[\033[00m\] '
	fi

	alias ls='ls --color=auto'
	alias grep='grep --colour=auto'
	alias egrep='egrep --colour=auto'
	alias fgrep='fgrep --colour=auto'
else
	if [[ ${EUID} == 0 ]] ; then
		# show root@ when we don't have colors
		PS1='\u@\h \W \$ '
	else
		PS1='\u@\h \w \$ '
	fi
fi

unset use_color safe_term match_lhs sh

alias cp="cp -i"     # confirm before overwriting something
alias df='df -h'     # human-readable sizes
alias free='free -m' # show sizes in MB
alias np='nano -w PKGBUILD'
alias more=less

xhost +local:root > /dev/null 2>&1

complete -cf sudo

# Bash won't get SIGWINCH if another process is in the foreground.
# Enable checkwinsize so that bash will check the terminal size when
# it regains control. #65623
# http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11)
shopt -s checkwinsize

shopt -s expand_aliases

# export QT_SELECT=4

# Enable history appending instead of overwriting. #139609
shopt -s histappend

# ex - archive extractor
# usage: ex <file>
ex () {
  if [ -f $1 ] ; then
    case $1 in
      *.tar.bz2) tar xjf $1   ;;
      *.tar.gz)  tar xzf $1   ;;
      *.bz2)     bunzip2 $1   ;;
      *.rar)     unrar x $1   ;;
      *.gz)      gunzip $1    ;;
      *.tar)     tar xf $1    ;;
      *.tbz2)    tar xjf $1   ;;
      *.tgz)     tar xzf $1   ;;
      *.zip)     unzip $1     ;;
      *.Z)       uncompress $1;;
      *.7z)      7z x $1      ;;
      *)         echo "'$1' cannot be extracted via ex()" ;;
    esac
  else
    echo "'$1' is not a valid file"
  fi
}

# Custom programs
export PATH="/home/user/Programs/pycharm-community-2020.2.1/bin:$PATH"

# Custom scripts
export PATH="/home/user/CustomScripts:$PATH"
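Once this .bashrc is in effect (open a new shell or run source ~/.bashrc), the ex() helper turns any supported archive into a one-liner; the file name here is just an example:

ex backup.tar.gz    # matches *.tar.gz, so this runs 'tar xzf backup.tar.gz'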

submitted by Amuoeba8 to i3wm [link] [comments]

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite: what was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals.
Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you no longer lose earned research rewards even if you fail to stake a block within 180 days or let your beacon lapse.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

CLI & GUI v0.16.0.3 'Nitrogen Nebula' released!

This is the CLI & GUI v0.16.0.3 'Nitrogen Nebula' point release. This release predominantly features bug fixes and performance improvements.

(Direct) download links (GUI)

(Direct) download links (CLI)

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 # This GPG-signed message exists to confirm the SHA256 sums of Monero binaries. # # Please verify the signature against the key for binaryFate in the # source code repository (/utils/gpg_keys). # # ## CLI 75b198869a3a117b13b9a77b700afe5cee54fd86244e56cb59151d545adbbdfd monero-android-armv7-v0.16.0.3.tar.bz2 b48918a167b0961cdca524fad5117247239d7e21a047dac4fc863253510ccea1 monero-android-armv8-v0.16.0.3.tar.bz2 727a1b23fbf517bf2f1878f582b3f5ae5c35681fcd37bb2560f2e8ea204196f3 monero-freebsd-x64-v0.16.0.3.tar.bz2 6df98716bb251257c3aab3cf1ab2a0e5b958ecf25dcf2e058498783a20a84988 monero-linux-armv7-v0.16.0.3.tar.bz2 6849446764e2a8528d172246c6b385495ac60fffc8d73b44b05b796d5724a926 monero-linux-armv8-v0.16.0.3.tar.bz2 cb67ad0bec9a342b0f0be3f1fdb4a2c8d57a914be25fc62ad432494779448cc3 monero-linux-x64-v0.16.0.3.tar.bz2 49aa85bb59336db2de357800bc796e9b7d94224d9c3ebbcd205a8eb2f49c3f79 monero-linux-x86-v0.16.0.3.tar.bz2 16a5b7d8dcdaff7d760c14e8563dd9220b2e0499c6d0d88b3e6493601f24660d monero-mac-x64-v0.16.0.3.tar.bz2 5d52712827d29440d53d521852c6af179872c5719d05fa8551503d124dec1f48 monero-win-x64-v0.16.0.3.zip ff094c5191b0253a557be5d6683fd99e1146bf4bcb99dc8824bd9a64f9293104 monero-win-x86-v0.16.0.3.zip # ## GUI 50fe1d2dae31deb1ee542a5c2165fc6d6c04b9a13bcafde8a75f23f23671d484 monero-gui-install-win-x64-v0.16.0.3.exe 20c03ddb1c82e1bcb73339ef22f409e5850a54042005c6e97e42400f56ab2505 monero-gui-linux-x64-v0.16.0.3.tar.bz2 574a84148ee6af7119fda6b9e2859e8e9028fe8a8eec4dfdd196aeade47e9c90 monero-gui-mac-x64-v0.16.0.3.dmg 371cb4de2c9ccb5ed99b2622068b6aeea5bdfc7b9805340ea7eb92e7c17f2478 monero-gui-win-x64-v0.16.0.3.zip # # # ~binaryFate -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl81bL8ACgkQ8K9NRioL 35J+UA//bgY6Mhikh8Cji8i2bmGXEmGvvWMAHJiAtAG2lgW3BT9BHAFMfEpUP5rk svFNsUY/Uurtzxwc/myTPWLzvXVMHzaWJ/EMKV9/C3xrDzQxRnl/+HRS38aT/D+N gaDjchCfk05NHRIOWkO3+2Erpn3gYZ/VVacMo3KnXnQuMXvAkmT5vB7/3BoosOU+ B1Jg5vPZFCXyZmPiMQ/852Gxl5FWi0+zDptW0jrywaS471L8/ZnIzwfdLKgMO49p Fek1WUUy9emnnv66oITYOclOKoC8IjeL4E1UHSdTnmysYK0If0thq5w7wIkElDaV avtDlwqp+vtiwm2svXZ08rqakmvPw+uqlYKDSlH5lY9g0STl8v4F3/aIvvKs0bLr My2F6q9QeUnCZWgtkUKsBy3WhqJsJ7hhyYd+y+sBFIQH3UVNv5k8XqMIXKsrVgmn lRSolLmb1pivCEohIRXl4SgY9yzRnJT1OYHwgsNmEC5T9f019QjVPsDlGNwjqgqB S+Theb+pQzjOhqBziBkRUJqJbQTezHoMIq0xTn9j4VsvRObYNtkuuBQJv1wPRW72 SPJ53BLS3WkeKycbJw3TO9r4BQDPoKetYTE6JctRaG3pSG9VC4pcs2vrXRWmLhVX QUb0V9Kwl9unD5lnN17dXbaU3x9Dc2pF62ZAExgNYfuCV/pTJmc= =bbBm -----END PGP SIGNATURE----- 
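If you save the signed message above to a file (say hashes.txt), the whole check can be done from a terminal on Linux and Mac OS X. A minimal sketch; the key file name is illustrative, and the key itself comes from the /utils/gpg_keys directory of the source repository as mentioned in the message:

# import binaryFate's signing key (only needed once)
gpg --import binaryfate.asc
# verify the clearsigned list of hashes
gpg --verify hashes.txt
# hash the binary you downloaded and compare the output against the signed list
sha256sum monero-gui-linux-x64-v0.16.0.3.tar.bz2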

Upgrading (GUI)

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (AntiVirus) software, I'd recommend to apply this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users//Monero/ (Mac OS X), or home//Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

Upgrading (CLI)

You ought to perform the following steps (a shell sketch of these steps, for Linux, follows the list):
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
  2. Extract the new binaries to a new directory of your liking.
  3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x or v0.16.0.x directory).
  4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.16.0.3, it will simply pick up where it left off.
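For Linux users, the steps above boil down to a few shell commands. A minimal sketch; the directory and wallet names are placeholders, so adjust them to your setup:

mkdir -p ~/monero-v0.16.0.3
tar -xjf monero-linux-x64-v0.16.0.3.tar.bz2 -C ~/monero-v0.16.0.3 --strip-components=1
# copy the wallet files over from the old directory
cp ~/monero-v0.15.0.0/mywallet ~/monero-v0.15.0.0/mywallet.keys ~/monero-v0.16.0.3/
cd ~/monero-v0.16.0.3
./monerod --detach                        # no resync needed, it picks up where it left off
./monero-wallet-cli --wallet-file mywallet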

Release notes (GUI)

  • macOS app is now notarized by Apple
  • CMake improvements
  • Add support for IPv6 remote nodes
  • Add command history to Logs page
  • Add "Donate to Monero" button
  • Indicate probability of finding a block on Mining page
  • Minor bug fixes
Note that you can find a full change log here.

Release notes (CLI)

  • DoS fixes
  • Add option to print daily coin emission and fees in monero-blockchain-stats
  • Minor bug fixes
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here.
  • Ledger Monero users, please be aware that version 1.6.0 of the Ledger Monero App is required in order to properly use CLI or GUI v0.16.

Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Ledger & Trezor CLI guides

Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to set a remote node manually, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero [link] [comments]

Adding cover artwork to CDI disc images for GDEMU/GDMENU

A question came up from u/pvcHook in a recent post about adding artwork to GDI images: can the same be done for games in a CDI format? The answer is yes, and the general process is the same as it is for the GDI games. I've already added all of the appropriate artwork to all of the indie shmup games and the like; can I share those here, or is that a no-no? Because if that's all you're here for, that would be a lot easier than putting yourself through this process. But it's something to learn, so read on.
First, if you want to do this, you're going to need the proper tools. Someone put together a CDI toolkit (password: DCSTUFF) of sorts on another forum; this is basically the same thing with a few additions and tweaks I've made; before you begin install ISO Buster from the 'isobuster' folder. You will also need the PVR Viewer utility to create the artwork files for the discs. The images you generate will need to be mounted to a virtual drive, so Daemon Tools or some other drive emulation software will also be required. And finally you'll need a copy of DiscJuggler to write your images into a format useable by an emulator or your GDEMU.
EXTRACTION
Here are the general extraction steps, I'll go into a bit more detail after the list:
  1. Copy your CDI image to the 'cdirip' folder in the toolkit and run the 'CDIrip pause.bat' file. Choose an output directory (preferably the 'isofix' folder) and let it rip. You will need to note the LBA info of the tracks being extracted (which is why I made this pause batch file). If only two tracks are extracted, then look closely at the sizes of the sectors that were extracted. If the first track is the larger of the two, then you will not need to use isofix to extract the contents. If the second track is the larger of the two, make note of its LBA value to use with isofix to extract its contents.
  2. Make sure you have installed ISO Buster, you will need it beyond this point.
  3. Go to the 'isofix' folder and you will see the contents of the disc. There will be image files named with the 'TData#.iso' convention and those are what we need to use. The steps diverge a bit from this point depending upon the format of the disc you just extracted; read carefully and follow the instructions for your situation.
  4. If the first track extracted in step one was the larger of the two tracks, open it in ISO Buster and go to step #7.
  5. If the second track extracted in step one was the larger of the two tracks, open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  6. If CDIrip extracted a bunch of wave files and a 'TData#.iso' file, the disc you extracted uses CDDA. Open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  7. In the left pane of ISO Buster you'll see the file structure of the iso file you opened; expand the tree until you see a red 'iso' icon and click on it. This should open up the files and folders within it in the right pane. Highlight all of these files, right click and choose 'Extract Objects'; choose the 'discroot' folder in the CDI toolkit.
Your CDI image is now extracted. Please note that all of the indie releases from NGDEV.TEAM, Hucast.Net, and Duranik use the CDDA format. You'll see the difference when it's time to rebuild the disc image. Also, if you're using PowerShell and not command prompt, the prompts to run the command line utilities are a bit different; you would need to type out '.\isofix' (minus quotes) to execute isofix, for example.
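Put together, the command prompt side of the extraction looks roughly like this; the image name, track number and LBA are illustrative, yours come from the CDIrip output:

:: from the toolkit root, with your .cdi image copied into the 'cdirip' folder
cd cdirip
"CDIrip pause.bat"
:: note the LBA values it prints, then fix the large data track if needed
cd ..\isofix
isofix.exe TData2.iso
:: enter the LBA noted above when prompted; this writes fixed.iso, which you
:: then open in ISO Buster and extract to the 'discroot' folder (step 7)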
COVER ART CREATION
There are other guides out there concerned with converting cover art files into the PVR format that the Dreamcast and GDEMU/GDMenu use, so I won't go into great detail about that here. I will note, however, that I generally load games up in Redream at least once so it fetches the cover art for the games. They are very good quality sources, and they're 512x512 so won't lose any quality when you reduce them to 256x256 for the GDMenu.
I will say, however, that a lot of the process in the guide I linked to is optional; you can simply open the source file in PVR Viewer and save it as a .pvr file and it will be fine. But feel free to get as detailed as you like with it.
REBUILDING
Once you have your cover art to your liking, make sure it's been placed in the 'discroot' folder and you can begin the image rebuilding process.
We'll start with an image that doesn't use CDDA:
  1. Check the 'discroot' folder for two files: 1ST_READ.BIN and IP.BIN. Select them, then copy and paste them into the 'binhack32' folder in the toolkit. Run the binhack32.exe application in the 'binhack32' folder (you may have to tweak your antivirus settings to do this).
  2. Binhack32 will prompt you to "enter name of binary": this is 1ST_READ.BIN, type it correctly and remember it is case sensitive. Once you enter the binary, you will be prompted to "enter name of bootsector": this is IP.BIN, again type correctly and remember case.
  3. The next prompt will ask you to update the LBA value of the binaries. Enter zero ( 0 ) for this value, since we are removing the preceding audio session track and telling the binaries to start from the beginning of the disc. Once the utility is done, select the two bin files, then cut and paste them back into the 'discroot' folder; overwrite when prompted.
  4. Open the 'bootdreams' folder and start up the BootDreams.exe executable. Before doing anything click on the "Extras" entry in the menu bar, and hover over "Dummy file"; some options will pop out. If you are burning off the discs for any reason, be sure to use one of the options, 650MB or 700MB. If you aren't burning them, still consider using the dummy data. It will compress down to nothing if you're saving these disc images for archival reasons.
  5. Click on the far left icon on the top of BootDreams, the green DiscJuggler icon. Open or drag'n'drop the 'discroot' folder into the "selfboot folder" field, and add whatever label you want for the disc (limited to 8 characters, otherwise you'll get an error). Change disc format to 'data/data', then click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. Choose an output location for the CDI image, and let the utilities go to work. If everything was set up properly you'll get a new disc image with cover art. I always boot the CDI up in RetroArch or another emulator to make sure it's valid and runs as expected so you don't waste time transferring a bad dump to your GDEMU (or burning a bad disc).
If your game uses CDDA, the process involves a few more steps, but it's nothing terribly complicated:
  1. Check the 'discroot' folder for the IP.BIN file. If it's there, everything is good, continue on to the next step. If it's not there, look in the 'isofix' directory: there should be a file called "bootsector.bin" in that folder. Copy that file and paste it into the 'discroot' folder, then rename it IP.BIN (all caps, even the file extension). Now you're good, go on to the next step.
  2. Remember all those files dumped into the 'isofix' directory? Go look at them now. Copy/cut and paste all of those wave files from 'isofix' into the 'bootdreams/cdda' folder.
  3. Start up the bootdreams.exe executable from the 'bootdreams' folder.
  4. Select the middle icon at the top of the BootDreams window, the big red 'A' for Alcohol 120% image. Once you've selected this, click on 'Extras' up in the menu bar and make sure the 'Add CDDA tracks' option is selected (has a check mark next to it).
  5. Open/drag'n'drop the finished 'discroot' folder into the selfboot folder field; put whatever name you'd like for the disc in the CD label field. Click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. A window showing you the audio files in the 'cdda' folder will pop up. Highlight all of them in the left pane and click the right-pointing arrow in the middle of the two fields to add them to the project. Make sure they are in order! Then click on OK. The audio files are converted to the appropriate raw format and the process continues. Choose an output location for the MDS/MDF files.
  8. When the files are finished, find them and mount them into a virtual drive (with Daemon Tools or whatever utility you prefer). Open up DiscJuggler and we'll make a CDI image.
  9. Start a new project in DiscJuggler (File > New, then choose 'Create disc images' from the menu). Choose your virtual drive with mounted image in the source field, and set your file output in the destination field. Click the Advanced tab above, and make sure 'Overburn disc' is selected. Click Start to begin converting into a CDI image.
  10. When DiscJuggler is done, close it down, unmount and delete the MDS/MDF files created by BootDreams, and test your CDI image with RetroArch or another emulator before transferring it to your GDEMU.
If you have followed these steps and the disc image will absolutely not boot, then it's possible that a certain disc layout is required and must be used. I have only run into this a few times, but in this situation you simply need to use the 'audio/data' option for the CDI image in Bootdreams to put the image back together. Please note: if you are going to try to build the image with the 'audio/data' option, then make sure you replace the IP.BIN file in the 'discroot' folder with the original, unmodified bootsector.bin file in the 'isofix' folder. The leading audio track is a set size, and the IP.BIN will be expecting this; remember, the IP.BIN modified by binhack32 changes the LBA value of the file and it won't work properly with the audio/data method.
These methods have worked for me each and every time I've wanted to add artwork to a CDI image, and it should work for you as well. This will also keep the original IP.BIN files from the discs, so it should keep anything that references this information intact (like the cover art function in Redream). If it doesn't, then the rebuilt images with artwork can be used on your GDEMU and you can keep the original disc images to use in Redream or wherever.
Let me know if anything is unclear and I can clean the guide up a bit. Or if I can just share the link to my Drive with the images done and uploaded!
submitted by king_of_dirt to dreamcast [link] [comments]

GSAT linux live cd (how to easily and safely stress test memory)

Skip to the bottom if you don't care about the technicalities of how this was made.
I stumbled upon this thread over at overclock.net featuring a linux live cd that has GSAT built in. I decided to try and improve upon this despite my very limited linux knowledge, and managed to create a fully automatic linux live cd image that runs GSAT as soon as you boot your PC from it. This means you don't have to fear corrupting your windows install when testing memory stability, unlike with windows based RAM testers, and because it's GSAT it should be at least as reliable as any windows based utility. Google themselves developed GSAT and use it to test memory, and so does Asus.
This is how I made this:
I started by downloading a fresh 64bit TinyCore linux image from here (CorePure64-10.1.iso). I also downloaded the image made by ToBeOC and extracted the compiled stressapptest binary from /usr/local/bin (using 7-Zip). Then I extracted boot/corepure64.gz from the clean TinyCore image I downloaded previously and moved that over to a Ubuntu 19.10 virtual machine where I did the following:
  1. Created a new folder (called 123) on my desktop and moved corepure64.gz there and opened a terminal window where I first switched directories to my newly created folder with cd 123 and then switched to root with sudo su.
  2. Extracted corepure64.gz with the following command: zcat corepure64.gz | cpio -i -H newc -d (which I found here)
  3. Opened the file explorer with root permissions by running this command: nautilus
  4. Navigated to /home/user/Desktop/123/usr/local/bin in the file explorer.
  5. Copied the stressapptest binary over to that directory and made it executable by right clicking on it, going to properties, opening the Permissions tab and checking "Allow executing file as program".
  6. Navigated to /home/user/Desktop/123/etc/profile.d and placed a file called gsat.sh which I made there, marking it as executable just like in the previous step (a sketch of what this script can look like follows this list). This is just a text file which you can open in notepad++ and edit if you wish. Make sure to save it with linux line endings (Edit > EOL Conversion in notepad++) if you edit it!
  7. Blanked out /home/user/Desktop/123/etc/motd (this step isn't necessary, it just removes the TinyCore linux motd).
  8. Opened the 123 folder on my desktop again and deleted the old corepure64.gz
  9. Repacked corepure64.gz by running the following command in the terminal window I opened previously, which was already in the right directory and running as root: find | cpio -o -H newc | gzip -2 > /home/user/Desktop/corepure64.gz
  10. Moved the new corepure64.gz back to Windows.
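For reference, the gsat.sh placed in step 6 only needs to launch stressapptest when the live system logs in. A minimal sketch of such a script; the exact flags here are my assumption, not necessarily what the final image uses:

#!/bin/sh
# /etc/profile.d/gsat.sh - start GSAT automatically on console login
echo "Starting Google stressapptest - press CTRL+C to stop and print the result"
# -W uses a more CPU-stressful memory copy; -s sets the duration in seconds
# (without -M, stressapptest sizes the test region from available memory itself)
stressapptest -W -s 86400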
In Windows I then used UltraISO to open the clean CorePure64-10.1.iso file and there opened the boot directory where I dragged and dropped the new corepure64.gz file replacing the old one. I then opened the isolinux directory and extracted the isolinux.cfg file, opened that in notepad++ and changed prompt 1 to prompt 0 and then moved that back in and replaced the old isolinux.cfg file. Then I simply choose Save As in UltraISO and saved the modified iso file.
The final product is just 15mb in size and can be flashed to any usb drive using Rufus. I tested this iso file in a virtual machine but also on 2 different physical machines once flashed to a usb drive (my main Ryzen rig and an older Intel PC).

You can download the final iso file from here: https://drive.google.com/uc?id=1TyeNihg6bKIrmyNwtJ7Fc3asD7XBnXsq&export=download

Here's how to use it:
  1. Download Rufus and flash the iso file to an empty usb flash drive.
  2. Reboot your PC and enter your BIOS (this is usually done by spamming the DEL key while your PC is booting up).
  3. Make sure secure boot is disabled (probably already is) and that CSM is Enabled. Check your motherboard manual which you can find online or google for more indepth instructions.
  4. Save your changes by pressing F10 after which your PC will reboot. Now you need to access your PCs boot menu which is usually F8 but not always, again check your motherboard manual for the exact key. You can also re-enter your BIOS and look for a boot override option or change your boot order. Pick your usb flash drive and boot your PC from it.
  5. That's it. The stress test will automatically start and you can let it run for as long as you wish. I recommend running it overnight for a thorough test but a quick 1 hour test should also suffice. Once you are ready to stop the test press CTRL+C to see the results. If it says PASS that means no errors were detected. If it says FAIL errors were detected and your memory settings aren't stable.
Here's a quick screen capture of what it looks like: https://streamable.com/4v06w
Lastly I want to thank ToBeOC for doing all the heavy lifting. And if anyone reading this has more experience with linux, and in particular with remastering a TinyCore linux iso, by all means release an iso done "right", since this is just a dirty mash up and the best I managed with my limited skills. I just wanted something that anyone with zero linux experience can use, where you don't have to remember any commands, just plug a usb stick in and boot from it.
submitted by 4wh457 to Amd [link] [comments]

Vault 7 - CIA Hacking Tools Revealed

Vault 7 - CIA Hacking Tools Revealed
March 07, 2017
from Wikileaks Website


https://preview.redd.it/9ufj63xnfdb41.jpg?width=500&format=pjpg&auto=webp&s=46bbc937f4f060bad1eaac3e0dce732e3d8346ee

Press Release
Today, Tuesday 7 March 2017, WikiLeaks begins its new series of leaks on the U.S. Central Intelligence Agency.
Code-named "Vault 7" by WikiLeaks, it is the largest ever publication of confidential documents on the agency.
The first full part of the series, "Year Zero", comprises 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence (below image) in Langley, Virginia.
It follows an introductory disclosure last month of CIA targeting French political parties and candidates in the lead up to the 2012 presidential election.
Recently, the CIA lost control of the majority of its hacking arsenal including,
  1. malware
  2. viruses
  3. trojans
  4. weaponized "zero day" exploits
  5. malware remote control systems

...and associated documentation.
This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.
The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.
"Year Zero" introduces the scope and direction of the CIA's global covert hacking program, its malware arsenal and dozens of "zero day" weaponized exploits against a wide range of U.S. and European company products, include,

  1. Apple's iPhone
  2. Google's Android
  3. Microsoft's Windows
  4. Samsung TVs,

...which are turned into covert microphones.
Since 2001 the CIA has gained political and budgetary preeminence over the U.S. National Security Agency (NSA).
The CIA found itself building not just its now infamous drone fleet, but a very different type of covert, globe-spanning force - its own substantial fleet of hackers.
The agency's hacking division freed it from having to disclose its often controversial operations to the NSA (its primary bureaucratic rival) in order to draw on the NSA's hacking capacities.
By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI - below image), had over 5000 registered users and had produced more than a thousand,
  1. hacking systems
  2. trojans
  3. viruses,
...and other "weaponized" malware.


https://preview.redd.it/3jsojkqxfdb41.jpg?width=366&format=pjpg&auto=webp&s=e92eafbb113ab3e972045cc242dde0f0dd511e96

Such is the scale of the CIA's undertaking that by 2016, its hackers had utilized more code than that used to run Facebook.
The CIA had created, in effect, its "own NSA" with even less accountability and without publicly answering the question as to whether such a massive budgetary spend on duplicating the capacities of a rival agency could be justified.
In a statement to WikiLeaks the source details policy questions that they say urgently need to be debated in public, including whether the CIA's hacking capabilities exceed its mandated powers and the problem of public oversight of the agency.
The source wishes to initiate a public debate about the security, creation, use, proliferation and democratic control of cyberweapons.
Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by rival states, cyber mafia and teenage hackers alike.

Julian Assange, WikiLeaks editor stated that,
"There is an extreme proliferation risk in the development of cyber 'weapons'.
Comparisons can be drawn between the uncontrolled proliferation of such 'weapons', which results from the inability to contain them combined with their high market value, and the global arms trade.
But the significance of 'Year Zero' goes well beyond the choice between cyberwar and cyberpeace. The disclosure is also exceptional from a political, legal and forensic perspective."

Wikileaks has carefully reviewed the "Year Zero" disclosure and published substantive CIA documentation while avoiding the distribution of 'armed' cyberweapons until a consensus emerges on the technical and political nature of the CIA's program and how such 'weapons' should be analyzed, disarmed and published.

Wikileaks has also decided to Redact (see far below) and Anonymize some identifying information in "Year Zero" for in depth analysis. These redactions include tens of thousands of CIA targets and attack machines throughout,
  1. Latin America
  2. Europe
  3. the United States

While we are aware of the imperfect results of any approach chosen, we remain committed to our publishing model and note that the quantity of published pages in "Vault 7" part one ("Year Zero") already eclipses the total number of pages published over the first three years of the Edward Snowden NSA leaks.

Analysis

CIA malware targets iPhone, Android, smart TVs
CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation).
The DDI is one of the five major directorates of the CIA (see above image of the CIA for more details).
The EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA in its covert operations world-wide.
The increasing sophistication of surveillance techniques has drawn comparisons with George Orwell's 1984, but "Weeping Angel", developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization.
The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS.
After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.
As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.
The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone.
Despite iPhone's minority share (14.5%) of the global smart phone market in 2016, a specialized unit in the CIA's Mobile Development Branch produces malware to infest, control and exfiltrate data from iPhones and other Apple products running iOS, such as iPads.
CIA's arsenal includes numerous local and remote "zero days" developed by CIA or obtained from GCHQ, NSA, FBI or purchased from cyber arms contractors such as Baitshop.
The disproportionate focus on iOS may be explained by the popularity of the iPhone among social, political, diplomatic and business elites.
A similar unit targets Google's Android which is used to run the majority of the world's smart phones (~85%) including Samsung, HTC and Sony. 1.15 billion Android powered phones were sold last year.
"Year Zero" shows that as of 2016 the CIA had 24 "weaponized" Android "zero days" which it has developed itself and obtained from GCHQ, NSA and cyber arms contractors.
These techniques permit the CIA to bypass the encryption of,
  1. WhatsApp
  2. Signal
  3. Telegram
  4. Wiebo
  5. Confide
  6. Cloackman
...by hacking the "smart" phones that they run on and collecting audio and message traffic before encryption is applied.
CIA malware targets Windows, OSx, Linux, routers
The CIA also runs a very substantial effort to infect and control Microsoft Windows users with its malware.
This includes multiple local and remote weaponized "zero days", air gap jumping viruses such as "Hammer Drill" which infects software distributed on CD/DVDs, infectors for removable media such as USBs, systems to hide data in images or in covert disk areas ("Brutal Kangaroo") and to keep its malware infestations going.
Many of these infection efforts are pulled together by the CIA's Automated Implant Branch (AIB), which has developed several attack systems for automated infestation and control of CIA malware, such as "Assassin" and "Medusa".
Attacks against Internet infrastructure and webservers are developed by the CIA's Network Devices Branch (NDB).
The CIA has developed automated multi-platform malware attack and control systems covering Windows, Mac OS X, Solaris, Linux and more, such as EDB's "HIVE" and the related "Cutthroat" and "Swindle" tools, which are described in the examples section far below.
CIA 'hoarded' vulnerabilities ("zero days")
In the wake of Edward Snowden's leaks about the NSA, the U.S. technology industry secured a commitment from the Obama administration that the executive would disclose on an ongoing basis - rather than hoard - serious vulnerabilities, exploits, bugs or "zero days" to Apple, Google, Microsoft, and other US-based manufacturers.
Serious vulnerabilities not disclosed to the manufacturers places huge swathes of the population and critical infrastructure at risk to foreign intelligence or cyber criminals who independently discover or hear rumors of the vulnerability.
If the CIA can discover such vulnerabilities so can others.
The U.S. government's commitment to the Vulnerabilities Equities Process came after significant lobbying by US technology companies, who risk losing their share of the global market over real and perceived hidden vulnerabilities.
The government stated that it would disclose all pervasive vulnerabilities discovered after 2010 on an ongoing basis.
"Year Zero" documents show that the CIA breached the Obama administration's commitments. Many of the vulnerabilities used in the CIA's cyber arsenal are pervasive and some may already have been found by rival intelligence agencies or cyber criminals.
As an example, specific CIA malware revealed in "Year Zero" is able to penetrate, infest and control both the Android phone and iPhone software that runs or has run presidential Twitter accounts.
The CIA attacks this software by using undisclosed security vulnerabilities ("zero days") possessed by the CIA but if the CIA can hack these phones then so can everyone else who has obtained or discovered the vulnerability.
As long as the CIA keeps these vulnerabilities concealed from Apple and Google (who make the phones) they will not be fixed, and the phones will remain hackable.
The same vulnerabilities exist for the population at large, including the U.S. Cabinet, Congress, top CEOs, system administrators, security officers and engineers.
By hiding these security flaws from manufacturers like Apple and Google the CIA ensures that it can hack everyone at the expense of leaving everyone hackable.
'Cyberwar' programs are a serious proliferation risk
Cyber 'weapons' are not possible to keep under effective control.
While nuclear proliferation has been restrained by the enormous costs and visible infrastructure involved in assembling enough fissile material to produce a critical nuclear mass, cyber 'weapons', once developed, are very hard to retain.
Cyber 'weapons' are in fact just computer programs which can be pirated like any other. Since they are entirely comprised of information they can be copied quickly with no marginal cost.
Securing such 'weapons' is particularly difficult since the same people who develop and use them have the skills to exfiltrate copies without leaving traces - sometimes by using the very same 'weapons' against the organizations that contain them.
There are substantial price incentives for government hackers and consultants to obtain copies since there is a global "vulnerability market" that will pay hundreds of thousands to millions of dollars for copies of such 'weapons'.
Similarly, contractors and companies who obtain such 'weapons' sometimes use them for their own purposes, obtaining advantage over their competitors in selling 'hacking' services.
Over the last three years the United States intelligence sector, which consists of government agencies such as the CIA and NSA and their contractors, such as Booz Allen Hamilton, has been subject to an unprecedented series of data exfiltrations by its own workers.
A number of intelligence community members not yet publicly named have been arrested or subject to federal criminal investigations in separate incidents.
Most visibly, on February 8, 2017 a U.S. federal grand jury indicted Harold T. Martin III on 20 counts of mishandling classified information.
The Department of Justice alleged that it seized some 50,000 gigabytes of information from Harold T. Martin III that he had obtained from classified programs at NSA and CIA, including the source code for numerous hacking tools.
Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by peer states, cyber mafia and teenage hackers alike.
U.S. Consulate in Frankfurt is a covert CIA hacker base
In addition to its operations in Langley, Virginia the CIA also uses the U.S. consulate in Frankfurt as a covert base for its hackers covering Europe, the Middle East and Africa.
CIA hackers operating out of the Frankfurt consulate ("Center for Cyber Intelligence Europe" or CCIE) are given diplomatic ("black") passports and State Department cover.
The instructions for incoming CIA hackers make Germany's counter-intelligence efforts appear inconsequential: "Breeze through German Customs because you have your cover-for-action story down pat, and all they did was stamp your passport."
Your Cover Story (for this trip)
Q: Why are you here?
A: Supporting technical consultations at the Consulate.
Two earlier WikiLeaks publications give further detail on CIA approaches to customs and secondary screening procedures.
Once in Frankfurt, CIA hackers can travel without further border checks to the 25 European countries that are part of the Schengen open border area - including France, Italy and Switzerland.
A number of the CIA's electronic attack methods are designed for physical proximity.
These attack methods are able to penetrate high-security networks that are disconnected from the internet, such as police record databases. In these cases, a CIA officer, agent or allied intelligence officer acting under instructions physically infiltrates the targeted workplace.
The attacker is provided with a USB drive containing malware developed for the CIA for this purpose, which is inserted into the targeted computer. The attacker then infects the computer and exfiltrates data to removable media.
For example, the CIA attack system Fine Dining provides 24 decoy applications for CIA spies to use.
To witnesses, the spy appears to be running a program showing videos (e.g. VLC), presenting slides (Prezi), playing a computer game (Breakout2, 2048) or even running a fake virus scanner (Kaspersky, McAfee, Sophos).
But while the decoy application is on the screen, the underlying system is automatically infected and ransacked.
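As a purely illustrative sketch of the decoy pattern itself (not the Fine Dining implementation, which is not public in code form), the following runs a visible foreground application while a background thread does other work; the decoy command and the background task are placeholder stand-ins.

```python
import subprocess
import threading
import time

def background_work() -> None:
    # Stand-in for whatever runs while the decoy occupies the screen;
    # here it only prints timestamps so the sketch stays benign.
    for _ in range(3):
        print("background tick:", time.time())
        time.sleep(1)

def run_with_decoy(decoy_cmd: list[str]) -> None:
    worker = threading.Thread(target=background_work, daemon=True)
    worker.start()                 # background work proceeds concurrently
    subprocess.run(decoy_cmd)      # the foreground decoy is all a witness sees
    worker.join(timeout=5)

if __name__ == "__main__":
    # "vlc" stands in for any of the decoy applications listed above.
    run_with_decoy(["vlc"])
```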
How the CIA dramatically increased proliferation risks
In what is surely one of the most astounding intelligence own goals in living memory, the CIA structured its classification regime such that the agency has little legal recourse over the most market-valuable part of "Vault 7": its weaponized malware (implants plus zero days), Listening Posts (LP) and Command and Control (C2) systems.
The CIA made these systems unclassified.
Why the CIA chose to make its cyber-arsenal unclassified reveals how concepts developed for military use do not easily cross over to the 'battlefield' of cyber 'war'.
To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet.
If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet.
Consequently the CIA has secretly made most of its cyber spying/war code unclassified. The U.S. government is not able to assert copyright either, due to restrictions in the U.S. Constitution.
This means that cyber 'arms' manufacturers and computer hackers can freely "pirate" these 'weapons' if they are obtained. The CIA has primarily had to rely on obfuscation to protect its malware secrets.
Conventional weapons such as missiles may be fired at the enemy (i.e. into an unsecured area). Proximity to or impact with the target detonates the ordnance including its classified parts. Hence military personnel do not violate classification rules by firing ordnance with classified parts.
Ordnance will likely explode. If it does not, that is not the operator's intent.
Over the last decade U.S. hacking operations have been increasingly dressed up in military jargon to tap into Department of Defense funding streams.
For instance, attempted "malware injections" (commercial jargon) or "implant drops" (NSA jargon) are being called "fires" as if a weapon was being fired.
However the analogy is questionable.
Unlike bullets, bombs or missiles, most CIA malware is designed to live for days or even years after it has reached its 'target'. CIA malware does not "explode on impact" but rather permanently infests its target. In order to infect a target's device, copies of the malware must be placed on the target's devices, giving physical possession of the malware to the target.
To exfiltrate data back to the CIA or to await further instructions the malware must communicate with CIA Command & Control (C2) systems placed on internet connected servers.
But such servers are typically not approved to hold classified information, so CIA command and control systems are also made unclassified.
A successful 'attack' on a target's computer system is more like a series of complex stock maneuvers in a hostile take-over bid or the careful planting of rumors in order to gain control over an organization's leadership rather than the firing of a weapons system.
If there is a military analogy to be made, the infestation of a target is perhaps akin to the execution of a whole series of military maneuvers against the target's territory including observation, infiltration, occupation and exploitation.
Evading forensics and anti-virus
A series of standards lay out CIA malware infestation patterns which are likely to assist forensic crime scene investigators as well as Apple, Microsoft, Google, Samsung, Nokia, Blackberry, Siemens and anti-virus companies attribute and defend against attacks.
"Tradecraft DO's and DON'Ts" contains CIA rules on how its malware should be written to avoid fingerprints implicating the "CIA, US government, or its witting partner companies" in "forensic review".
Similar secret standards cover the use of encryption to hide CIA hacker and malware communication (pdf), describing targets & exfiltrated data (pdf), executing payloads (pdf) and persisting (pdf) in the target's machines over time.
CIA hackers developed successful attacks against most well known anti-virus programs.
These are documented in AV defeats, Personal Security Products, Detecting and defeating PSPs and PSP/Debugger/RE Avoidance. For example, Comodo was defeated by CIA malware placing itself in Windows' "Recycle Bin", while Comodo 6.x has a "Gaping Hole of DOOM".
CIA hackers discussed what the NSA's "Equation Group" hackers did wrong and how the CIA's malware makers could avoid similar exposure.

Examples

The CIA's Engineering Development Group (EDG) management system contains around 500 different projects (only some of which are documented by "Year Zero") each with their own sub-projects, malware and hacker tools.
The majority of these projects relate to tools that are used for penetration, infestation ("implanting"), control and exfiltration.
Another branch of development focuses on the development and operation of Listening Posts (LP) and Command and Control (C2) systems used to communicate with and control CIA implants.
Special projects are used to target specific hardware from routers to smart TVs.
Some example projects are described below, but see the table of contents for the full list of projects described by WikiLeaks' "Year Zero".
UMBRAGE
The CIA's hand crafted hacking techniques pose a problem for the agency.
Each technique it has created forms a "fingerprint" that can be used by forensic investigators to attribute multiple different attacks to the same entity.
This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible.
As soon as one murder in the set is solved, the other murders also find likely attribution.
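To make the attribution logic concrete, here is a minimal sketch (with invented sample data and a toy fingerprint function) of how shared technique "fingerprints" link separate incidents, assuming each sample can be reduced to a set of recoverable code fragments.

```python
import hashlib
from collections import defaultdict

def fingerprint(code_fragment: bytes) -> str:
    # A stand-in for real forensic features (strings, import tables,
    # compiler artifacts): here, just a short hash of a code fragment.
    return hashlib.sha256(code_fragment).hexdigest()[:12]

# Hypothetical samples recovered from separate incidents.
samples = {
    "incident_A": [b"keylog-routine-v1", b"persist-trick"],
    "incident_B": [b"persist-trick", b"webcam-grab"],
    "incident_C": [b"unrelated-dropper"],
}

# Group incidents by shared fingerprints.
by_print = defaultdict(set)
for incident, fragments in samples.items():
    for frag in fragments:
        by_print[fingerprint(frag)].add(incident)

# Any fingerprint seen in more than one incident links those incidents,
# exactly as a distinctive wound links separate murders.
for fp, incidents in by_print.items():
    if len(incidents) > 1:
        print(f"fingerprint {fp} links: {sorted(incidents)}")
```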
The CIA's Remote Devices Branch's UMBRAGE group collects and maintains a substantial library of attack techniques 'stolen' from malware produced in other states including the Russian Federation.
With UMBRAGE and related projects the CIA can not only increase its total number of attack types but also misdirect attribution by leaving behind the "fingerprints" of the groups that the attack techniques were stolen from.
UMBRAGE components cover:
  1. keyloggers
  2. password collection
  3. webcam capture
  4. data destruction
  5. persistence
  6. privilege escalation
  7. stealth
  8. anti-virus (PSP) avoidance
  9. survey techniques

Fine Dining
Fine Dining comes with a standardized questionnaire, i.e. a menu, that CIA case officers fill out.
The questionnaire is used by the agency's OSB (Operational Support Branch) to transform the requests of case officers into technical requirements for hacking attacks (typically "exfiltrating" information from computer systems) for specific operations.
The questionnaire allows the OSB to identify how to adapt existing tools for the operation, and communicate this to CIA malware configuration staff.
The OSB functions as the interface between CIA operational staff and the relevant technical support staff.
Among the list of possible targets of the collection are:
  • 'Asset'
  • 'Liaison Asset'
  • 'System Administrator'
  • 'Foreign Information Operations'
  • 'Foreign Intelligence Agencies'
  • 'Foreign Government Entities'
Notably absent is any reference to extremists or transnational criminals. The 'Case Officer' is also asked to specify the environment of the target, such as the type of computer, operating system used, Internet connectivity and installed anti-virus utilities (PSPs), as well as a list of file types to be exfiltrated, such as Office documents, audio, video, images or custom file types.
The 'menu' also asks whether recurring access to the target is possible and how long unobserved access to the computer can be maintained.
This information is used by the CIA's 'JQJIMPROVISE' software (see below) to configure a set of CIA malware suited to the specific needs of an operation.
Improvise (JQJIMPROVISE)
'Improvise' is a toolset for configuration, post-processing, payload setup and execution vector selection for survey/exfiltration tools supporting all major operating systems:
  1. Windows (Bartender)
  2. MacOS (JukeBox)
  3. Linux (DanceFloor)
Its configuration utilities, like Margarita, allow the NOC (Network Operation Center) to customize tools based on requirements from 'Fine Dining' questionnaires, as in the sketch below.
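A minimal sketch of that customization step, assuming nothing more than the OS-to-module mapping named above; the questionnaire field names and the selection function are invented for illustration and are not the actual Improvise interface.

```python
# Hypothetical questionnaire-to-module selection, loosely modeled on the
# Windows/MacOS/Linux split described above. Field names are invented.
MODULES = {
    "windows": "Bartender",
    "macos": "JukeBox",
    "linux": "DanceFloor",
}

def select_module(questionnaire: dict) -> str:
    os_name = questionnaire["target_os"].lower()
    try:
        return MODULES[os_name]
    except KeyError:
        raise ValueError(f"no survey/exfiltration module for OS: {os_name}")

# Example: a case officer's answers, reduced to the fields that matter here.
answers = {"target_os": "Linux", "recurring_access": True}
print(select_module(answers))  # -> DanceFloor
```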
HIVE
HIVE is a multi-platform CIA malware suite and its associated control software.
The project provides customizable implants for Windows, Solaris, MikroTik (used in internet routers) and Linux platforms and a Listening Post (LP)/Command and Control (C2) infrastructure to communicate with these implants.
The implants are configured to communicate via HTTPS with the webserver of a cover domain; each operation utilizing these implants has a separate cover domain and the infrastructure can handle any number of cover domains.
Each cover domain resolves to an IP address that is located at a commercial VPS (Virtual Private Server) provider.
The public-facing server forwards all incoming traffic via a VPN to a 'Blot' server that handles actual connection requests from clients.
It is set up for optional SSL client authentication: if a client sends a valid client certificate (only implants can do that), the connection is forwarded to the 'Honeycomb' toolserver that communicates with the implant.
If a valid certificate is missing (which is the case if someone tries to open the cover domain website by accident), the traffic is forwarded to a cover server that delivers an unsuspicious-looking website.
The Honeycomb toolserver receives exfiltrated information from the implant; an operator can also task the implant to execute jobs on the target computer, so the toolserver acts as a C2 (command and control) server for the implant.
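The routing decision described here (valid client certificate goes to the toolserver, no certificate gets the cover website) is standard optional TLS client authentication. A minimal sketch with Python's ssl module follows; the certificate paths and the two routing targets are placeholders, and this is a generic illustration of the mechanism, not HIVE's actual code.

```python
import socket
import ssl

# Optional TLS client authentication: request a client certificate but do
# not require one, then route based on whether a valid one was presented.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # placeholder server cert
ctx.load_verify_locations("implant_ca.crt")       # CA that issues client certs
ctx.verify_mode = ssl.CERT_OPTIONAL               # cert requested, not required

with socket.create_server(("0.0.0.0", 8443)) as srv:
    while True:
        raw, addr = srv.accept()
        try:
            conn = ctx.wrap_socket(raw, server_side=True)
        except ssl.SSLError:
            raw.close()       # handshake failed (e.g. invalid certificate)
            continue
        if conn.getpeercert():
            # Valid client certificate: in the description above, this
            # traffic would be forwarded to the 'Honeycomb' toolserver.
            route = "toolserver"
        else:
            # No certificate: an accidental visitor; serve the cover site.
            route = "cover website"
        print(f"{addr} -> {route}")
        conn.close()
```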
Similar functionality (though limited to Windows) is provided by the RickBobby project.
See the classified user and developer guides for HIVE.

Frequently Asked Questions

Why now?
WikiLeaks published as soon as its verification and analysis were ready. In February the Trump administration issued an Executive Order calling for a "Cyberwar" review to be prepared within 30 days.
While the review increases the timeliness and relevance of the publication it did not play a role in setting the publication date.
Redactions
Names, email addresses and external IP addresses have been redacted in the released pages (70,875 redactions in total) until further analysis is complete. Over-redaction: Some items may have been redacted that are not employees, contractors, targets or otherwise related to the agency, but are, for example, authors of documentation for otherwise public projects that are used by the agency.
Identity vs. person: the redacted names are replaced by user IDs (numbers) to allow readers to assign multiple pages to a single author. Given the redaction process used, a single person may be represented by more than one assigned identifier, but no identifier refers to more than one real person.
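This identity-preserving redaction is a standard pseudonymization step: every occurrence of a name is replaced by a stable numeric ID, so authorship links between pages survive while identities do not. A minimal sketch with invented names follows; as noted above, the real process may assign more than one identifier to the same person, which this simpler one-to-one sketch does not capture.

```python
import itertools
import re

# Assign each distinct redacted name a stable numeric user ID.
_ids: dict[str, int] = {}
_counter = itertools.count(1)

def pseudonym(name: str) -> str:
    if name not in _ids:
        _ids[name] = next(_counter)
    return f"User #{_ids[name]}"

def redact(text: str, names: list[str]) -> str:
    for name in names:
        text = re.sub(re.escape(name), pseudonym(name), text)
    return text

# Hypothetical pages: the same ID recurs, linking the pages to one author.
print(redact("Written by Alice, reviewed by Bob.", ["Alice", "Bob"]))
print(redact("Alice updated the build notes.", ["Alice", "Bob"]))
```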
Archive attachments (zip, tar.gz, ...) are replaced with a PDF listing all the file names in the archive. As the archive content is assessed it may be made available; until then the archive is redacted.
Attachments with other binary content are replaced by a hex dump of the content to prevent accidental invocation of binaries that may have been infected with weaponized CIA malware. As the content is assessed it may be made available; until then the content is redacted.
Tens of thousands of references to routable IP addresses (including more than 22 thousand within the United States) that correspond to possible targets, CIA covert listening post servers, intermediary and test systems are redacted for further exclusive investigation.
Binary files of non-public origin are only available as dumps to prevent accidental invocation of CIA malware infected binaries.
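Replacing a binary attachment with a hex dump, as described above, takes only a few lines; this sketch mimics the classic xxd layout (the filename is a placeholder) and is a generic illustration, not WikiLeaks' actual tooling.

```python
def hex_dump(data: bytes, width: int = 16) -> str:
    # Render bytes as offset, hex columns and a printable-ASCII gutter,
    # so the content can be inspected without ever being executable.
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexes = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexes:<{width * 3}} {text}")
    return "\n".join(lines)

with open("attachment.bin", "rb") as f:   # placeholder filename
    print(hex_dump(f.read()))
```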
Organizational Chart
The organizational chart (see the image far above) corresponds to the material published by WikiLeaks so far.
Since the organizational structure of the CIA below the level of Directorates is not public, the placement of the EDG and its branches within the org chart of the agency is reconstructed from information contained in the documents released so far.
It is intended to be used as a rough outline of the internal organization; please be aware that the reconstructed org chart is incomplete and that internal reorganizations occur frequently.
Wiki pages
"Year Zero" contains 7818 web pages with 943 attachments from the internal development groupware. The software used for this purpose is called Confluence, a proprietary software from Atlassian.
Webpages in this system (like in Wikipedia) have a version history that can provide interesting insights on how a document evolved over time; the 7818 documents include these page histories for 1136 latest versions.
The order of named pages within each level is determined by date (oldest first). Page content is not present if it was originally dynamically created by the Confluence software (as indicated on the re-constructed page).
What time period is covered?
The years 2013 to 2016. The sort order of the pages within each level is determined by date (oldest first).
WikiLeaks has obtained the CIA's creation/last modification date for each page but these do not yet appear for technical reasons. Usually the date can be discerned or approximated from the content and the page order.
If it is critical to know the exact time/date, contact WikiLeaks.
What is "Vault 7"
"Vault 7" is a substantial collection of material about CIA activities obtained by WikiLeaks.
When was each part of "Vault 7" obtained?
Part one was obtained recently and covers through 2016. Details on the other parts will be available at the time of publication.
Is each part of "Vault 7" from a different source?
Details on the other parts will be available at the time of publication.
What is the total size of "Vault 7"?
The series is the largest intelligence publication in history.
How did WikiLeaks obtain each part of "Vault 7"?
Sources trust WikiLeaks to not reveal information that might help identify them.
Isn't WikiLeaks worried that the CIA will act against its staff to stop the series?
No. That would certainly be counter-productive.
Has WikiLeaks already 'mined' all the best stories?
No. WikiLeaks has intentionally not written up hundreds of impactful stories to encourage others to find them and so create expertise in the area for subsequent parts in the series. They're there.
Look. Those who demonstrate journalistic excellence may be considered for early access to future parts.
Won't other journalists find all the best stories before me?
Unlikely. There are very considerably more stories than there are journalists or academics who are in a position to write them.