No... the add() call is necessary for the HMaster TableDescriptor cache update. I see that the change is applied only to the table descriptors... a FileSystem delete is missing!
./zahoor

On Thu, Aug 9, 2012 at 3:05 PM, J Mohamed Zahoor <[email protected]> wrote:
> Hi
>
> Hmmm... finding more and more as I dig...
> In MasterFileSystem:
>
>   public HTableDescriptor deleteColumn(byte[] tableName, byte[] familyName)
>       throws IOException {
>     LOG.info("DeleteColumn. Table = " + Bytes.toString(tableName)
>         + " family = " + Bytes.toString(familyName));
>     HTableDescriptor htd = this.services.getTableDescriptors().get(tableName);
>     htd.removeFamily(familyName);
>     this.services.getTableDescriptors().add(htd);
>     return htd;
>   }
>
> It should have "this.services.getTableDescriptors().remove(htd)" instead
> of "add". I think this is the problem... the previous one will create a
> problem if this was executed as an executor service...
>
> Note: I still don't understand why DeleteColumn was not submitted as an
> executor service, though.
>
> ./zahoor
>
> On Thu, Aug 9, 2012 at 2:33 PM, J Mohamed Zahoor <[email protected]> wrote:
>
>> Hi
>>
>> I see this at least in trunk, in TableDeleteFamilyHandler's constructor:
>>
>>   public TableDeleteFamilyHandler(byte[] tableName, byte[] familyName,
>>       Server server, final MasterServices masterServices) throws IOException {
>>     super(EventType.C_M_ADD_FAMILY, tableName, server, masterServices);
>>     HTableDescriptor htd = getTableDescriptor();
>>     this.familyName = hasColumnFamily(htd, familyName);
>>   }
>>
>> The event type is added as C_M_ADD_FAMILY...
>> Maybe this is the problem, and it is there in 0.94.1 also!
>>
>> ./zahoor
>>
>> On Wed, Aug 8, 2012 at 5:33 PM, prem yadav <[email protected]> wrote:
>>
>>> Hi Stack,
>>> we have got a lot of these in the logs after we deleted the column
>>> family.
>>>
>>> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column
>>> family ol does not exist in region <region>
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:4759)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.checkFamilies(HRegion.java:2639)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2065)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:1962)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3428)
>>>   at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>   at java.lang.reflect.Method.invoke(Method.java:601)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1386)
>>> 2012-08-05 00:11:37,697 DEBUG org.apache.hadoop.hbase.regionserver.HRegion:
>>> rollbackMemstore rolled back 0 keyvalues from start:0 to end:1
>>>
>>>
>>> On Tue, Aug 7, 2012 at 10:27 PM, Ted Yu <[email protected]> wrote:
>>>
>>> > Looking at TableDeleteFamilyHandler, I see:
>>> >
>>> >   protected void handleTableOperation(List<HRegionInfo> hris)
>>> >       throws IOException {
>>> >     // Update table descriptor in HDFS
>>> >     HTableDescriptor htd =
>>> >         this.masterServices.getMasterFileSystem().deleteColumn(tableName,
>>> >             familyName);
>>> >     // Update in-memory descriptor cache
>>> >     this.masterServices.getTableDescriptors().add(htd);
>>> >
>>> > MasterFileSystem.deleteColumn() calls:
>>> >
>>> >   HTableDescriptor htd = this.services.getTableDescriptors().get(tableName);
>>> >   htd.removeFamily(familyName);
>>> >
>>> > I will dig some more.
>>> >
>>> > But it looks like there is a bug.
>>> >
>>> > On Tue, Aug 7, 2012 at 9:21 AM, Stack <[email protected]> wrote:
>>> >
>>> > > On Tue, Aug 7, 2012 at 9:59 AM, prem yadav <[email protected]> wrote:
>>> > > > Hi Stack,
>>> > > > yes, I can still see the column family.
>>> > > >
>>> > > > We ran the following command too:
>>> > > >
>>> > > >   fs -dus /hbase/<table name>/<region hash>/<cf that was deleted>
>>> > > >
>>> > > > and it returned a large value, which means that the space has not
>>> > > > been reclaimed. The HBase version we are running is 0.94.1.
>>> > > >
>>> > > > Is there a manual way to force reclaiming the space?
>>> > >
>>> > > You could delete the column family from HDFS under each region.
>>> > > Anything in the logs around the removal of the column family? Maybe
>>> > > we just don't do the delete cleanup in HDFS when we drop a CF (I've
>>> > > not checked)?
>>> > >
>>> > > St.Ack
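The manual cleanup Stack suggests above can be sketched as a couple of FileSystem shell commands. This is a minimal, hedged sketch: the table name "usertable" is an assumption for illustration (the thread elides the real table name), only the family "ol" comes from the stack trace, and the deletion is irreversible, so verify the paths first.

```shell
TABLE="usertable"   # hypothetical table name; substitute your own
CF="ol"             # the dropped family from the stack trace above
CF_PATH="/hbase/${TABLE}/*/${CF}"   # one <region hash> directory per region
echo "${CF_PATH}"

# Inspect the space still held by the dropped family (as done in the thread):
#   hadoop fs -dus ${CF_PATH}
# Then remove the family directory under each region to reclaim it:
#   hadoop fs -rmr ${CF_PATH}
```

The `hadoop fs` lines are left commented out since they touch the live cluster; the glob stands in for iterating over every region directory of the table.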
