Hybrid Columnar Compression & Fast Ingest

In this episode, hosts Lois Houston and Nikita Abraham speak with Senior Principal Database & MySQL Instructor Bill Millar about the enhanced performance of Hybrid Columnar Compression, the different compression levels, and how to achieve the best compression for your tables. Then, they delve into Fast Ingest, what’s new in Oracle Database 23ai, and the benefits of these improvements.

Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://twitter.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode.

--------------------------------------------------------

Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

00:26

Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

Nikita: Hi everyone! In our last episode, we spoke about the 23ai improvements in time and data handling and data storage with Senior Principal Instructor Serge Moiseev. Today, we’re going to discuss the enhancements that have been made to the performance of Hybrid Columnar Compression. We'll look at how Hybrid Columnar Compression was prior to 23ai, learn about the changes that have been made, talk about how to use this compression in 23ai, and look at some performance factors. After that, we’ll move on to Fast Ingest, the improvements in 23ai, and how it is managed.

01:15 Lois: Yeah, this is a packed episode and to take us through all this, we have Bill Millar back on the podcast. Bill is a Senior Principal Database & MySQL Instructor with Oracle University. Hi Bill! Thanks for joining us. So, let’s start with how Hybrid Columnar Compression was prior to 23ai. What can you tell us about it?

Bill: We support all kinds of platforms, from Database Enterprise Edition on up to the high-end engineered systems, and even Exadata Cloud@Customer. We have four different levels of compression. One is considered the warehouse compression, where we do a COLUMN STORE COMPRESS FOR QUERY LOW and COLUMN STORE COMPRESS FOR QUERY HIGH. The COLUMN STORE COMPRESS FOR QUERY HIGH is the default, unless another compression level is specified. With the archive compression, we have the COLUMN STORE COMPRESS FOR ARCHIVE LOW and also COLUMN STORE COMPRESS FOR ARCHIVE HIGH.

With the Hybrid Columnar Compression warehouse and archive levels, array inserts are compressed immediately. However, some conditions have to be met. To use these, it has to be a locally managed tablespace with automatic segment space management, and the compatibility level has to be at least 12.2 or higher, which is when these values were introduced. There are different compressors used for the compression, hidden from the customer; which one is used just depends on what level is selected. Notice that with COLUMN STORE COMPRESS FOR QUERY HIGH and FOR ARCHIVE LOW, the zlib compression method is used, whereas if you select ARCHIVE HIGH, it's Bzip2. And in 19c, we added Zstandard, and it's available for MEMORY COMPRESS FOR CAPACITY HIGH.
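
To make those levels concrete, here is a minimal sketch of how each one is declared; the table and column names are placeholders, and the clauses follow what Bill describes above.

-- Warehouse (query) compression
CREATE TABLE sales_q_low (id NUMBER, sale_date DATE, amount NUMBER)
  COLUMN STORE COMPRESS FOR QUERY LOW;

CREATE TABLE sales_q_high (id NUMBER, sale_date DATE, amount NUMBER)
  COLUMN STORE COMPRESS FOR QUERY HIGH;  -- the default HCC level if none is specified

-- Archive compression
CREATE TABLE sales_a_low (id NUMBER, sale_date DATE, amount NUMBER)
  COLUMN STORE COMPRESS FOR ARCHIVE LOW;

CREATE TABLE sales_a_high (id NUMBER, sale_date DATE, amount NUMBER)
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH;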

03:30 Nikita: So, what’s happened in 23ai?

Bill: In 23ai, to take advantage of the changes in compression, the compatibility level has to be set to 23.0.0 or higher.

When a table is created or altered with Hybrid Columnar Compression, Zstandard will automatically be selected. So it doesn't matter which one of the four levels you select, that's the compressor that's going to be used. It's set internally, transparent to the user. There is no new SQL format that has to be used in order for the Zstandard compression to be applied.

And the database compatibility mode has to be at least 23.0.0 or higher. Only then can the Hybrid Columnar Compression storage format use that Zstandard compression. If we already have compressed data blocks in existing tables, they're going to remain in their original format.
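
As a quick sketch of what that means in practice: no new syntax is involved, only the compatibility setting matters. The table name below is a placeholder.

-- The compatibility level must be 23.0.0 or higher
SHOW PARAMETER compatible;

-- Same HCC clause as before; with that compatibility level the database
-- transparently uses Zstandard for any of the four levels
CREATE TABLE sensor_archive (id NUMBER, reading NUMBER, ts TIMESTAMP)
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH;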

04:31 Lois: And are the objects regenerated?

Bill: They might be regenerated if they were deleted by another operation. If you want to completely take advantage of the new compression, all you have to do is an ALTER TABLE ... MOVE. That's going to go ahead and trigger the recompression, whereas any newly created tables will use Zstandard by default.
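
A minimal sketch of that recompression step, reusing the placeholder table from above:

-- Rewrites the existing blocks so they pick up the new Zstandard-based format
ALTER TABLE sensor_archive MOVE;

-- For a partitioned table, move the partitions you want recompressed, for example:
-- ALTER TABLE sensor_archive MOVE PARTITION p2023;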

05:00 Nikita: What are the performance factors we need to think about, Bill?

Bill: There are some performance factors that we do need to consider: the compression ratio, the amount of space reduction in storage that we're going to achieve; the time spent compressing the data and the CPU cost to compress it; and also the decompression rate, the time spent decompressing the data when we're doing queries on it.

05:24 Lois: And not all tables are equal, are they?

Bill: Not all tables are equal. Some might get better performance from a different compression level than others. So we basically have to test our results. There is a Compression Advisor available that you can use to give you a recommendation on what compression to use. But only through testing can we really see the viability, the benefits, of using that compression for an application.

So for best compression, just as in previous versions, the higher the compression level, the more CPU it's going to use. And the higher the compression level, the more space savings we're going to achieve as we are doing those direct path inserts. So there's always that cost.
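
The Compression Advisor Bill mentions is exposed through the DBMS_COMPRESSION package. A hedged PL/SQL sketch of asking it to estimate a QUERY HIGH ratio might look like this; the schema, table, and scratch tablespace names are placeholders, and the exact parameter list should be checked against your release's documentation.

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',          -- scratch tablespace for the sampling
    ownname        => 'SALES_OWNER',    -- placeholder schema
    objname        => 'SENSOR_ARCHIVE', -- placeholder table
    subobjname     => NULL,
    comptype       => DBMS_COMPRESSION.COMP_QUERY_HIGH,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated ratio for ' || l_comptype_str || ': ' || l_cmp_ratio);
END;
/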

06:20 Did you know that the Oracle University Learning Community regularly holds live events hosted by Oracle expert instructors? Find out how to prepare for your certification exams. Learn about the latest technology advances and features. Ask questions in real time and learn from an Oracle subject matter expert. From Ask Me Anything about certification to Ask the Instructor coaching sessions, you’ll be able to achieve your learning goals for 2024 in no time. Join a live event today and witness firsthand the transformative power of the Oracle University Learning Community. Visit mylearn.oracle.com to get started.

07:01 Nikita: Welcome back! Let’s now move on to the enhancements that have been made to fast ingest. We’ll begin with an overview of fast ingest, how to use it, and the improvements and benefits. And then we’ll look at some features for managing fast ingest. Bill, why don’t you start by defining fast ingest for us?

Bill: Fast ingest, also referred to as deferred inserts, is faster than processing a single row at a time. It can support high-volume transactions, like from Internet of Things applications, where you have hundreds of thousands of items coming in trying to write to the database.

They are faster because the inserts don't use the traditional buffer cache. They use a buffer that is sized out of the large pool, and then they're later written to disk using the SMCO, the space management coordinator. Instead of using the buffer cache, they're going to write into an area of the large pool.

The space management coordinator has these helper threads (however many that is, it's just a number) that will buffer the data. And as a buffer is filled, based on the sizing algorithm, it will then write those deferred inserts into the database itself.

08:24 Lois: So, do deferred inserts support constraints?

Bill: Deferred writes do support constraints and indexes, just as regular inserts do. However, performance benchmarks that have been done recommend that you disable constraints if you're going to use the fast ingest.

08:41 Lois: Can you tell us a bit about the streaming and ingest mechanism?

Bill: We declare a table with MEMOPTIMIZE FOR WRITE. We can do that in the CREATE TABLE statement, or we can alter the table for that. The data is written to the large pool, unlike traditionally writing items to the buffer cache. It's going to write to the ingest buffer in the large pool, and then it gets drained: it's written from that area by those background processes to the actual database itself.

So there's very high throughput, since the drainers issue deferred writes in large batches. We're not having to wait on the buffer cache: OK, I need space. OK, I need to write. I need to free up blocks. It's very ideal for these streaming inserts: sensor readings, alarms, door locks, those types of things.
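
A minimal sketch of declaring a table for fast ingest and writing to it with a hint, as described above; the table and its columns are placeholders.

-- Enable fast ingest when the table is created...
CREATE TABLE iot_readings (
  sensor_id NUMBER,
  reading   NUMBER,
  ts        TIMESTAMP
) MEMOPTIMIZE FOR WRITE;

-- ...or enable it on an existing table
-- ALTER TABLE iot_readings MEMOPTIMIZE FOR WRITE;

-- Deferred (fast ingest) insert: buffered in the large pool, drained to disk later
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO iot_readings VALUES (101, 22.7, SYSTIMESTAMP);
COMMIT;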

09:33 Nikita: How does performance improve with this?

Bill: With the benchmarks we have done, we have found that the performance can be up to 75% faster by going ahead and doing the fast ingest versus traditional inserts. We've seen 23 million inserts per second on a single X6-2 server in the benchmarks that we have.

09:58 Nikita: Are there any considerations to keep in mind?

Bill: With the fast ingest, there are some things to consider. The written data, you might need to validate to make sure it's there. So you might have input files that are loading it; you might want to hang on to those before that data is destroyed, and have some kind of mechanism to validate that, yes, it was written.

There is a possible loss of data. Why? Because unlike the buffer cache, which has the recovery mechanism with the redo and the undo, there is none with that large pool. So if the system crashes and the buffers haven't been flushed yet, then there's a possible loss of data.

There are no queries from the large pool, meaning that if I want to query the information that the fast ingest is loading into the table, it doesn't go and see what's sitting in the buffer in the large pool like it does with the buffer cache.

Indexes and constraints are checked, but only at flush time. And the memoptimize pool size is a fixed amount of memory that we're going to allocate to use for the memoptimize write.

We can enable a table for the fast ingest by enabling it with MEMOPTIMIZE FOR WRITE. We can create a table and do it, or, if we already have a table existing, all we have to do is alter it to say we want to use the fast ingest for these tables.

11:21 Lois: Do we have options for the writing operation, Bill?

Bill: You do have options for the writing operation. We have the MEMOPTIMIZE_WRITE parameter, where we can turn that on, and we can also use it in a hint. It is set at the root level; it is not modifiable at the PDB level. It is a static parameter. We can also do things in our session. We want to verify, OK, is the memoptimize write on? We can verify a table is enabled.

So with the fast ingest data inserts, you can also use a hint, or you can set this at a session level.

If you decide there's something that you don't want to use the memoptimize write for, then you can disable it for a table.
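
As a rough sketch of those options; note that the system parameter name and its values here are taken from the description in this episode rather than verified against the 23ai reference, so treat them as assumptions.

-- Turn deferred writes on for all eligible inserts (static, set in the root container
-- per the episode; parameter name and value assumed from the discussion above)
ALTER SYSTEM SET memoptimize_write = ALL SCOPE = SPFILE;

-- Or leave the default, hint-driven behavior and request it per statement
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO iot_readings VALUES (102, 19.4, SYSTIMESTAMP);

-- Disable fast ingest for a table that shouldn't use it
ALTER TABLE iot_readings NO MEMOPTIMIZE FOR WRITE;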

12:11 Nikita: Bill, what are some of the benefits of the enhancements made in 23ai?

Bill: With some of the enhancements, some table attributes are now supported. We can now have default values for a column. We can use transparent data encryption. We can also use the fast inserts with inline LOBs, along with virtual columns. We've also added partitioning support: we can do subpartitioning, and we can also do interval partitioning, along with auto list. So we've added some items that previously prevented us from doing the fast inserts. It does provide additional flexibility, especially with the enhancements and the restrictions that we have removed. It allows us to use that fast insert, especially in a data warehouse-type environment. In the Cloud, it can also use encrypted tablespaces, because remember, in the Cloud, we always encrypt users' data by default. So it also gives us the ability to use it in that Cloud environment because of that change.

We have faster background flushing for the loads.
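
As an illustrative sketch of the kind of table those lifted restrictions allow, combining a column default, an inline LOB, a virtual column, and interval partitioning with fast ingest; the names are placeholders, and whether every combination is supported should be confirmed against the 23ai documentation.

CREATE TABLE iot_events (
  event_id  NUMBER,
  device_id NUMBER,
  payload   CLOB,                               -- inline LOB
  sev       NUMBER DEFAULT 3,                   -- column default value
  sev_label VARCHAR2(10) GENERATED ALWAYS AS    -- virtual column
              (CASE WHEN sev >= 4 THEN 'HIGH' ELSE 'NORMAL' END),
  ts        TIMESTAMP
)
PARTITION BY RANGE (ts) INTERVAL (NUMTODSINTERVAL(1, 'DAY'))  -- interval partitioning
( PARTITION p0 VALUES LESS THAN (TIMESTAMP '2024-01-01 00:00:00') );

ALTER TABLE iot_events MEMOPTIMIZE FOR WRITE;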

13:36 Lois: And how is it faster now?

Bill: Because we bypass the traditional buffer cache. It's faster ingest for those direct ingests. So again, bypassing the traditional inserts that use the buffer cache gives the ability to bulk load into the large pool and then flush to the database. That way, we have access to that data for possibly faster analytics on those Internet of Things readings, especially when it comes to temperature sensors. We need to know when the temperature of something is out of bounds very quickly. Or maybe it's sensors for security. We need to know when there's a problem with the security.

14:20 Nikita: How difficult is it to manage this?

Bill: Management is fairly simple. We have the MEMOPTIMIZE_WRITE_AREA_SIZE parameter, which is dynamic. It does not require a restart. However, all instances in a RAC environment must have the same value. So we have the write area; what are we going to set it to? And then the MEMOPTIMIZE_WRITE parameter: by default, it uses a hint, or we can go ahead and just set that to ALL.

It is allocated from the large pool. You manually set it. And we can see how much is actually being allocated to the pool. We can go out and look at our alert log for that information.

There's also a view, the MEMOPTIMIZE_WRITE_AREA, which has some columns: What is the total memory allocated for the large pool? How much is currently used by the fast ingest? How much free space is there? As you're using it, you might want to go out and do a little checking. Do you have enough space? Are you not allocating enough space? Or have you allocated too much? It'll also show the total number of writes, and also the number of writers, which is currently the users that are using it.

And the container ID: what is the container within that container database? What's the pluggable or pluggables that are using the fast ingest?
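
A hedged sketch of that management surface; the V$ prefix on the view name is an assumption here, and the parameter size is just an example value.

-- Size the fast ingest write area (dynamic, no restart; keep it identical on all RAC instances)
ALTER SYSTEM SET memoptimize_write_area_size = 1G SCOPE = BOTH;

-- Inspect total, used, and free space, plus writes, writers, and container ID
SELECT * FROM v$memoptimize_write_area;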

There is a subprogram package, DBMS_MEMOPTIMIZE, that we have access to and can possibly use. So there are some procedures. We can return the low and high water marks of the sequence numbers of the rows, and the key here is that that's across all the sessions. We can see the high water mark sequence number of the rows written to the large pool for the current session. And we can also flush all the ingest data from the large pool to disk for the current session.

16:26 Lois: What if I want to flush them all for all sessions?

Bill: Well, that's where we have the WRITE_FLUSH procedure. It's going to flush the fast ingest data of the Memoptimized Rowstore from the large pool for all the sessions. As a DBA, that's one that you most likely will want to be using, especially before a shutdown or something along that line.
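
A rough sketch of those flush calls. WRITE_END is the per-session flush procedure documented in earlier releases, and WRITE_FLUSH is named here as it is described in this episode; the exact package placement and procedure names should be verified against the DBMS_MEMOPTIMIZE documentation for your release.

-- Flush this session's pending fast ingest data from the large pool to disk
EXEC DBMS_MEMOPTIMIZE.WRITE_END;

-- Flush pending fast ingest data for all sessions (for example, before a planned shutdown)
EXEC DBMS_MEMOPTIMIZE.WRITE_FLUSH;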

16:49 Nikita: Ok! On that note, I think we can end this episode. Thank you so much for taking us through all that, Bill.

Lois: Yes, thanks Bill. If you want to learn more about what we discussed today, visit mylearn.oracle.com and search for Oracle Database 23ai New Features for Administrators. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off!

17:21 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
