
Registered since September 28th, 2017
Has a total of 4246 bookmarks.
Showing top Tags within 17 bookmarks
howto information development guide reference administration design website software solution service product online business uk tool company linux code server system application web list video marine create data experience description tutorial explanation technology build blog article learn world project boat download windows security lookup free performance javascript technical network control beautiful support london tools course file research purchase library programming image youtube example php construction html opensource quality install community computer profile feature power browser music platform mobile work user process database share manage hardware professional buy industry internet dance advice installation developer 3d search camera material access customer travel test standard review documentation css money engineering webdesign engine develop device photography digital api speed source program management phone discussion question event client story simple water marketing yacht app content setup package fast idea interface account communication cheap compare script study market easy live google resource operation startup monitor training
Tag selected: replication.
Looking up replication tag. Showing 17 results.
Saved by uncleflo on December 19th, 2019.
This article will walk you through setting up a server with Python 3, MySQL, and Apache2, sans the help of a framework. By the end of this tutorial, you will be fully capable of launching a barebones system into production. Django is often the one-stop shop for all things Python; it’s compatible with nearly all versions of Python, comes prepackaged with a custom server, and even features a one-click-install database. Setting up a vanilla system without this powerful tool can be tricky, but it earns you invaluable insight into server structure from the ground up. This tutorial uses only package installers, namely apt-get and Pip. Package installers are simply small programs that make code installations much more convenient and manageable. Without them, maintaining libraries, modules, and other code bits can become an extremely messy business.
tutorial python apache cluster server database configure browser package replication installation configuration automation install howto explanation information lookup framework development web development web administration libraries
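A minimal sketch of the kind of framework-less stack the article above describes: a bare WSGI application talking to MySQL, served by Apache2 via mod_wsgi. All package, credential, and database names here are illustrative assumptions, not taken from the article itself.

```python
# System packages (Debian/Ubuntu, illustrative):
#   apt-get install apache2 libapache2-mod-wsgi-py3
# Python driver:
#   pip install mysql-connector-python
import mysql.connector

def application(environ, start_response):
    # One connection per request keeps the sketch simple; production code
    # would use a connection pool.
    conn = mysql.connector.connect(user="appuser", password="secret",
                                   host="127.0.0.1", database="appdb")
    cur = conn.cursor()
    cur.execute("SELECT VERSION()")
    (version,) = cur.fetchone()
    conn.close()
    body = f"MySQL says: {version}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```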
Saved by uncleflo on July 24th, 2019.
You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into an AWS Region that is closer to your users, and make it easier to migrate from one AWS Region to another. For each source DB cluster, you can have up to five cross-region DB clusters that are Read Replicas. When you create an Aurora MySQL DB cluster Read Replica in another AWS Region, you should be aware of some pitfalls.
replica cluster replication aurora endpoint amazon specify instance monitoring enhanced source promote cross browser database region multi compute migrate aws recovery administration developer howto
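A hedged boto3 sketch of the operation the excerpt describes: creating a cross-region Aurora MySQL Read Replica cluster by pointing a new cluster at the source cluster's ARN. Identifiers, regions, and the account number are placeholders.

```python
import boto3

# The replica cluster is created in the *target* region; replication is set up
# by naming the source cluster's ARN as ReplicationSourceIdentifier.
rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_cluster(
    DBClusterIdentifier="myapp-replica-cluster",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:myapp-source-cluster"
    ),
    SourceRegion="us-east-1",  # lets boto3 presign the cross-region request
)

# A cluster is storage plus endpoints; it needs at least one instance to serve reads.
rds.create_db_instance(
    DBInstanceIdentifier="myapp-replica-instance-1",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="myapp-replica-cluster",
)
```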
Saved by uncleflo on July 24th, 2019.
Many customers want a disaster recovery environment, and they want to use this environment daily and know that it's in sync with and can support a production workload. This leads them to an active-active architecture. In other cases, users like Netflix and Lyft are distributed over large geographies. In these cases, multi-region active-active deployments are not optional. Designing these architectures is more complicated than it appears, as data being generated at one end needs to be synced with data at the other end. There are also consistency issues to consider. One needs to make trade-off decisions on cost, performance, and consistency. Further complicating matters, the variety of data stores used in the architecture results in a variety of replication methods. In this session, we explore how to design an active-active multi-region architecture using AWS services, including Amazon Route 53, Amazon RDS multi-region replication, AWS DMS, and Amazon DynamoDB Streams. We discuss the challenges, trade-offs, and solutions.
disaster aws recovery environment day sync production workload architecture netflix geography region deployment design consistent complication performance store replication method session amazon route data rds challenge trade solution multi-region administration youtube movie guide
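One building block of the active-active design this session covers is latency-based DNS, which sends each user to the nearest region. A sketch with boto3 and Amazon Route 53; the hosted zone ID, domain, and regional endpoints are hypothetical.

```python
import boto3

r53 = boto3.client("route53")

# One latency record per region, all sharing the same name; Route 53 answers
# with the record whose region is closest to the caller.
for region, endpoint in [("us-east-1", "app-use1.example.com"),
                         ("eu-west-1", "app-euw1.example.com")]:
    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # placeholder
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": region,  # required for any routing policy
                "Region": region,         # latency-based routing key
                "TTL": 60,
                "ResourceRecords": [{"Value": endpoint}],
            },
        }]},
    )
```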
Saved by uncleflo on July 11th, 2019.
It’s a few weeks after AWS re:Invent 2018 and my head is still spinning from all of the information released at this year’s conference. This year I was able to enjoy a few sessions focused on Aurora deep dives. In fact, I walked away from the conference realizing that my own understanding of High Availability (HA), Disaster Recovery (DR), and Durability in Aurora had been off for quite a while. Consequently, I decided to put this blog out there, both to collect the ideas in one place for myself, and to share them in general. Unlike some of our previous blogs, I’m not focused on analyzing Aurora performance or examining the architecture behind Aurora. Instead, I want to focus on how HA, DR, and Durability are defined and implemented within the Aurora ecosystem. We’ll get just deep enough into the weeds to be able to examine these capabilities alone.
aurora durability replication workload redundancy configuration diligence database automation informative replica latency storage layer cluster duplicate priority transaction development administration blog article discussion realize system define examine high availability disaster recovery dr explanation
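The in-region HA the post dissects boils down to replica promotion inside the cluster: replicas share the storage volume, and one is promoted on failover according to its promotion tier. A sketch for inspecting and rehearsing this with boto3 (the cluster name is a placeholder):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# List cluster members with their writer/reader role and promotion tier.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="myapp-cluster"
)["DBClusters"][0]

for member in cluster["DBClusterMembers"]:
    role = "writer" if member["IsClusterWriter"] else "reader"
    print(member["DBInstanceIdentifier"], role, "tier", member["PromotionTier"])

# Trigger a failover on purpose to rehearse DR; Aurora promotes the
# best-placed reader to writer.
rds.failover_db_cluster(DBClusterIdentifier="myapp-cluster")
```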
Saved by uncleflo on July 11th, 2019.
News, articles and tools covering Amazon Web Services (AWS), including S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, Route 53, CloudFront, Lambda, VPC, CloudWatch, Glacier and more. Just a curiosity. At this point I've done: a single staging environment for small teams, which obviously doesn't work well beyond a couple of people; one-off EC2 instances that just use Docker to make a mini-replicated environment; CloudFormation to spin up an entire, yet smaller, version of our production; and ngrok, just having our QA test directly. The area where I'm interested to see how people are handling it is data. Are you replicating the full DB? Just using an in-memory database and some seed data? Additionally, for those of you that have this (or have had it in the past), how did you set it up to be "pleasurable" to work with? By that I mean it not being some long, winding Rube Goldberg machine that almost defeats the purpose of having a CI.
sandbox setup snapshot replication deploy stack account environment instance control machine aws amazon production quality assurance handle administration howto guide read forum question team
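One common answer to the data question raised above is to skip live replication entirely and rebuild staging from a recent production snapshot. A hedged boto3 sketch; instance identifiers and the instance class are placeholders, and any data scrubbing would follow the restore.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Find the most recent automated snapshot of production...
snaps = rds.describe_db_snapshots(
    DBInstanceIdentifier="prod-db",
    SnapshotType="automated",
)["DBSnapshots"]
latest = max(snaps, key=lambda s: s["SnapshotCreateTime"])

# ...and restore it as a disposable staging instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="staging-db",
    DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    DBInstanceClass="db.t3.medium",
)
```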
Saved by uncleflo on June 23rd, 2019.
Following, you can find a description of Amazon Aurora Global Database. Each Aurora global database spans multiple AWS Regions, enabling low latency global reads and disaster recovery from region-wide outages. An Aurora global database consists of one primary AWS Region where your data is mastered, and one read-only, secondary AWS Region. Aurora replicates data to the secondary AWS Region with typical latency of under a second. You issue write operations directly to the primary DB instance in the primary AWS Region. An Aurora global database uses dedicated infrastructure to replicate your data, leaving database resources available entirely to serve application workloads. Applications with a worldwide footprint can use reader instances in the secondary AWS Region for low latency reads. In the unlikely event your database becomes degraded or isolated in an AWS region, you can promote the secondary AWS Region to take full read-write workloads in under a minute.
aurora cluster database endpoint latency snapshot amazon query console replication footprint synchronize bucket secondary instance primary compatibility relational capability website failover mariadb howto isolated resource application work create enable browser read parameter outage administration aws documentation
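A sketch of how the documented setup might be created with boto3: promote an existing regional cluster into a global database, then attach a read-only secondary cluster in another AWS Region. Names, regions, and the ARN are placeholders, and the engine version must be one that supports global databases.

```python
import boto3

# Wrap the existing primary cluster in a global database...
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="myapp-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:myapp-primary"
    ),
)

# ...then create the read-only secondary cluster in another region by
# joining it to the global cluster.
rds_secondary = boto3.client("rds", region_name="ap-southeast-2")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="myapp-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="myapp-global",
)
```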
Saved by uncleflo on June 23rd, 2019.
A lot of RDS's documentation about read replicas contains a magical step along the lines of "direct database traffic to the new master." For instance, their instructions on implementing failure recovery include just such a step. This talk of directing traffic glosses over what is actually a complicated step, though. If I were using EC2 instances to host my database, I could give them elastic IPs, use the public DNS address of the instance to address it (which resolves to its private IP from inside AWS), and then instantly swap my entire stack to the read replica by reassigning the elastic IP (and thus simultaneously reassigning the public DNS). I used this method happily back in the days when RDS was considered straightforwardly inferior to rolling your own database instance on EC2 by many DBAs. RDS instances still cannot have elastic IPs, though, so I cannot use this particular trick to magically redirect all my database traffic to a new instance when using RDS.
downtime unscheduled overflow fault deploy database answer server documentation automatic failover address reassign method rds inferior complicated disaster recovery instance host aurora instruction replication switch administration cloud aws howto discussion question
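The usual workaround for the problem the poster describes is to put a DNS name you control in front of the database: a short-TTL CNAME that normally resolves to the master's endpoint and is flipped to the promoted replica's endpoint during failover. A sketch with Route 53; the zone ID and hostnames are hypothetical.

```python
import boto3

def point_db_at(endpoint: str) -> None:
    """Repoint the application's database hostname at a new RDS endpoint."""
    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # placeholder private hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "db.internal.example.com",
                "Type": "CNAME",
                "TTL": 30,  # low TTL so the swap propagates quickly
                "ResourceRecords": [{"Value": endpoint}],
            },
        }]},
    )

# After promoting the read replica, the whole stack follows in one call:
point_db_at("myapp-replica.abc123.us-east-1.rds.amazonaws.com")
```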
Saved by uncleflo on June 23rd, 2019.
Lab 2: Below are the steps we followed to set up Route 53 failover and achieve disaster recovery of an application and RDS database. We will examine the primary Region 1 and what to do there, the secondary Region 2 and the steps for it, how to fail over Route 53 from one region to the other, and how to test your failover.
test administration failover disaster recover route application rds database region examine follow setup replication solution amazon read video youtube watch howto explanation development
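The lab's Route 53 piece amounts to a health-checked failover record pair: traffic goes to the primary region while its health check passes, and to the secondary otherwise. A hedged boto3 sketch with placeholder zone, hostnames, and health-check path.

```python
import boto3

r53 = boto3.client("route53")

# Health check that watches the primary region's endpoint.
hc_id = r53.create_health_check(
    CallerReference="dr-lab-primary-hc",  # must be unique per request
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app-primary.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]["Id"]

# PRIMARY/SECONDARY failover records sharing one name.
for failover, target, extra in [
    ("PRIMARY", "app-primary.example.com", {"HealthCheckId": hc_id}),
    ("SECONDARY", "app-secondary.example.com", {}),
]:
    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # placeholder
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": failover.lower(),
                "Failover": failover,
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
                **extra,
            },
        }]},
    )
```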
Saved by uncleflo on May 27th, 2019.
Let's learn the Amazon Aurora database from scratch. What are the new features of Aurora over RDS? How do you achieve HA with an Aurora cluster? What are the different types of endpoints in Aurora? Aurora's compatibility with MySQL & PostgreSQL. Reader, writer & custom endpoints in an Aurora cluster.
database check understand achieve architect administrator practical exercise amazon watch cluster learn db howto solution problem advanced administration infrastructure system replication available
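Of the endpoint types the course lists, the writer and reader endpoints come with every Aurora cluster, while custom endpoints group chosen instances, for example to keep analytics traffic off the main replicas. A boto3 sketch; cluster and instance names are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A custom READER endpoint served only by one designated replica.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="myapp-cluster",
    DBClusterEndpointIdentifier="myapp-analytics",
    EndpointType="READER",
    StaticMembers=["myapp-instance-3"],
)

# Applications then connect by role rather than by instance:
eps = rds.describe_db_cluster_endpoints(DBClusterIdentifier="myapp-cluster")
for ep in eps["DBClusterEndpoints"]:
    print(ep.get("EndpointType"), ep.get("Endpoint"))
```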
Saved by uncleflo on May 12th, 2019.
If you are using mostly open source in your enterprise and have a few MS SQL Server databases around, you might want to consider migrating those to MySQL. We can migrate an MS SQL database to MySQL using the migration module of the "MySQL Workbench" utility. Download and install the MySQL Installer, which includes Workbench and the other necessary connectors and drivers required for the migration.
workbench schema migrate migration database server setup wizard install manually please destination replication convert connect parameter replica table administration business license cost efficient howto require consider module article reference tutorial connector
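The Workbench migration wizard itself is GUI-driven, so there is nothing from the article to script; for comparison, a hand-rolled copy of a single table could look like the sketch below. This is a different technique than the wizard, and every connection detail, driver name, and the table schema is hypothetical.

```python
import pyodbc            # pip install pyodbc (plus the MS ODBC driver)
import mysql.connector   # pip install mysql-connector-python

# Read rows out of MS SQL Server...
src = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=mssql-host;"
    "DATABASE=legacy;UID=reader;PWD=secret"
)
rows = src.cursor().execute("SELECT id, name, price FROM products").fetchall()

# ...and write them into MySQL. The target table must already exist.
dst = mysql.connector.connect(user="writer", password="secret",
                              host="mysql-host", database="shop")
cur = dst.cursor()
cur.executemany(
    "INSERT INTO products (id, name, price) VALUES (%s, %s, %s)",
    [tuple(r) for r in rows],
)
dst.commit()
```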
Saved by uncleflo on January 3rd, 2019.
A-Z Windows Commands, Batch files, DOS and PowerShell. Fsutil.exe is a built-in filesystem tool that is useful for file-system-related operations from the command line. We can create a file of a required size using this tool; a single command creates a 1 MB file dummy.txt within a few seconds. If you want to create a 1 GB file, you only need to change the size argument.
dummy append file folder byte files batch replication disk iteration loop development administration windows command tool large test testing
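The commands this excerpt refers to were lost when the page was captured; given the description, they are presumably of this form (fsutil takes the size in bytes):

```
:: Create a 1 MB file named dummy.txt (1048576 bytes)
fsutil file createnew dummy.txt 1048576
:: For a 1 GB file, change the size argument (1073741824 bytes)
fsutil file createnew dummy.txt 1073741824
```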
Saved by uncleflo on June 27th, 2018.
How can you bring out MySQL’s full power? With High Performance MySQL, you’ll learn advanced techniques for everything from designing schemas, indexes, and queries to tuning your MySQL server, operating system, and hardware to their fullest potential. This guide also teaches you safe and practical ways to scale applications through replication, load balancing, high availability, and failover.
amazon mysql power performance technique design index query tune server system hardware potential practical scale application replication balance availability failover infrastructure advanced optimization backup high performance buy product book information developer administration
Saved by uncleflo on March 23rd, 2018.
MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It is available on Linux only, and only supports the XtraDB/InnoDB storage engines (although there is experimental support for MyISAM - see the wsrep_replicate_myisam system variable). Starting with MariaDB 10.1, the wsrep API for Galera Cluster is included by default. This is available as a separate download for MariaDB 10.0 and MariaDB 5.5.
cluster galera mariadb master db linux support engine parallel replication download software storage administration development code application data
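A quick way to confirm that a Galera node is actually part of a synced cluster is to read the wsrep status variables the provider exports. A minimal sketch against a local MariaDB node, assuming mysql-connector-python; credentials are placeholders.

```python
import mysql.connector

conn = mysql.connector.connect(user="root", password="secret",
                               host="127.0.0.1", port=3306)
cur = conn.cursor()

# wsrep_* status variables are exported by the Galera wsrep provider.
for var in ("wsrep_cluster_size", "wsrep_cluster_status",
            "wsrep_local_state_comment"):
    cur.execute("SHOW GLOBAL STATUS LIKE %s", (var,))
    name, value = cur.fetchone()
    print(f"{name} = {value}")

# A healthy node reports cluster_status = Primary and
# local_state_comment = Synced.
```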
Saved by uncleflo on February 26th, 2018.
At work, we run a simple high-availability (HA) MariaDB setup that consists of an active master that handles all read and write queries from our applications, a passive master that can take over for the active master at any time, and a read-only replication slave (not shown) that we use for backups and analytics. Replication is configured so that the active master follows the passive master, the passive master follows the active master, and the analytics slave follows one of the masters. For the remainder of this post, I will refer to the active master as the master and the passive master as the standby. The benefit of this master-master configuration is that it allows us not only to fail over from master to standby if the master becomes unhealthy, but also to perform patching, reboots, lengthy migrations, and other kinds of database maintenance without impacting our users. Well, almost...
high availability mariadb master slave query db dbms backup replication failover painless administration configuration perform patch reboot application remain impact user configure maintain
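Behind the setup described above sit the classic MariaDB replication statements, issued symmetrically on both masters. A hedged sketch of one direction of the ring, plus the lag check you would run before patching or failing over; hosts and credentials are placeholders.

```python
import mysql.connector

# On the standby: replicate from the active master (the mirror-image
# CHANGE MASTER TO on the active master completes the master-master pair).
conn = mysql.connector.connect(user="repl_admin", password="secret",
                               host="standby.internal")
cur = conn.cursor()
cur.execute("""
    CHANGE MASTER TO
        MASTER_HOST = 'active.internal',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'repl-secret',
        MASTER_USE_GTID = slave_pos
""")
cur.execute("START SLAVE")

# Check replication lag before any failover or maintenance window:
cur.execute("SHOW SLAVE STATUS")
row = cur.fetchone()
status = dict(zip([col[0] for col in cur.description], row))
print("Seconds_Behind_Master:", status["Seconds_Behind_Master"])
```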
Saved by uncleflo on February 25th, 2018.
This is a follow-up blog post that expands on the subject of highly available clusters, discussed in MariaDB MaxScale High Availability: Active-Standby Cluster. Replication Manager is a tool that manages MariaDB 10 clusters. It supports both interactive and automated failover of the master server. It verifies the integrity of the slave servers before promoting one of them as the replacement master, and it also protects the slaves by automatically setting them into read-only mode. You can find more information on replication-manager in the replication-manager GitHub repository. Using Replication Manager allows us to automate replication failover. This reduces the amount of manual work required to adapt to changes in the cluster topology and makes for a more highly available database cluster. In this blog post, we'll cover the topic of backend database HA and we'll use Replication Manager to create a complete HA solution. We build on the setup described in the earlier blog post and integrate Replication Manager into it. We're using CentOS 7 as our OS and the 0.7.0-rc2 version of replication-manager.
replication cluster high availability manager manage administration topology master slave automatic failover failsafe integration howto article description use software mariasql verify repository expansion readonly server information standby blog post subject discuss backend solution
Saved by uncleflo on June 21st, 2017.
Gérald Oster is an Associate Professor at TELECOM Nancy, University of Lorraine, since 2006. He is a member of the Inria Coast project-team. He has expertise in distributed collaborative systems, with a focus on content replication mechanisms and their applicability. He received his Ph.D. in Computer Science from Nancy University in 2005. During his Ph.D., he worked on verifying the correctness of a family of optimistic replication mechanisms (operational transformation) dedicated to collaborative editing. He proposed a framework based on an automated theorem prover and several sets of verified transformation functions for multiple data types. He worked on the design and implementation of a universal file synchronizer. He is one of the pioneers of the CRDT approach, having participated in the design of the WOOT algorithm that initiated research on these distinctive data structures. He is currently investigating the limitations and applicability of these novel replicated data structures in diverse domains. Gérald is or was involved in several research projects and has participated in several technology-transfer-oriented projects.
research university replication project team collaborative system distributed computer science teamwork website protocol
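The CRDT approach mentioned in the bio is easiest to see with the simplest replicated data type, a grow-only counter: each replica counts its own increments, and merging is an element-wise max, so replicas converge no matter the order in which updates arrive. A minimal sketch; this illustrates the general idea, not the WOOT algorithm itself, which targets collaborative text editing.

```python
class GCounter:
    """Grow-only counter CRDT: state maps replica id -> its increment count."""

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other: "GCounter") -> None:
        # Element-wise max is commutative, associative, and idempotent,
        # which is what guarantees convergence.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())

# Two replicas update independently, then sync in either order:
a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()
b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3
```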
Saved by uncleflo on September 20th, 2013.
The Z file system, originally developed by Sun™, is designed around a pooled storage method: space is only consumed as it is needed for data storage. It is also designed for maximum data integrity, supporting data snapshots, multiple copies, and data checksums. It uses a software data replication model known as RAID-Z. RAID-Z provides redundancy similar to hardware RAID, but is designed to prevent data write corruption and to overcome some of the limitations of hardware RAID.
zfs file system linux install bootable boot kernel howto guide handbook storage method integrity copy checksum mirror replication data raid snapshot space redundancy advanced
No further bookmarks found.